00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2039 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3299 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.130 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.131 The recommended git tool is: git 00:00:00.131 using credential 00000000-0000-0000-0000-000000000002 00:00:00.132 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.169 Fetching changes from the remote Git repository 00:00:00.170 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.214 Using shallow fetch with depth 1 00:00:00.214 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.214 > git --version # timeout=10 00:00:00.236 > git --version # 'git version 2.39.2' 00:00:00.236 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.255 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.255 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.527 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.540 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.553 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:05.553 > git config core.sparsecheckout # timeout=10 00:00:05.565 > git read-tree -mu HEAD # timeout=10 00:00:05.581 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:05.604 Commit message: "packer: Add bios builder" 00:00:05.605 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:05.709 [Pipeline] Start of Pipeline 00:00:05.721 [Pipeline] library 00:00:05.722 Loading library shm_lib@master 00:00:05.723 Library shm_lib@master is cached. Copying from home. 00:00:05.734 [Pipeline] node 00:00:05.758 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.759 [Pipeline] { 00:00:05.768 [Pipeline] catchError 00:00:05.769 [Pipeline] { 00:00:05.778 [Pipeline] wrap 00:00:05.785 [Pipeline] { 00:00:05.791 [Pipeline] stage 00:00:05.793 [Pipeline] { (Prologue) 00:00:05.805 [Pipeline] echo 00:00:05.806 Node: VM-host-WFP1 00:00:05.809 [Pipeline] cleanWs 00:00:05.817 [WS-CLEANUP] Deleting project workspace... 00:00:05.817 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.824 [WS-CLEANUP] done 00:00:05.957 [Pipeline] setCustomBuildProperty 00:00:06.024 [Pipeline] httpRequest 00:00:06.039 [Pipeline] echo 00:00:06.040 Sorcerer 10.211.164.101 is alive 00:00:06.046 [Pipeline] httpRequest 00:00:06.049 HttpMethod: GET 00:00:06.050 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:06.050 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:06.051 Response Code: HTTP/1.1 200 OK 00:00:06.051 Success: Status code 200 is in the accepted range: 200,404 00:00:06.052 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:08.431 [Pipeline] sh 00:00:08.728 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:08.744 [Pipeline] httpRequest 00:00:08.779 [Pipeline] echo 00:00:08.781 Sorcerer 10.211.164.101 is alive 00:00:08.791 [Pipeline] httpRequest 00:00:08.796 HttpMethod: GET 00:00:08.797 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:08.797 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:08.812 Response Code: HTTP/1.1 200 OK 00:00:08.813 Success: Status code 200 is in the accepted range: 200,404 00:00:08.814 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:04.994 [Pipeline] sh 00:01:05.277 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:07.861 [Pipeline] sh 00:01:08.145 + git -C spdk log --oneline -n5 00:01:08.145 dbef7efac test: fix dpdk builds on ubuntu24 00:01:08.145 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:08.145 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:08.145 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:08.145 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:08.165 [Pipeline] writeFile 00:01:08.182 [Pipeline] sh 00:01:08.465 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:08.478 [Pipeline] sh 00:01:08.763 + cat autorun-spdk.conf 00:01:08.763 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:08.763 SPDK_TEST_NVME=1 00:01:08.763 SPDK_TEST_FTL=1 00:01:08.763 SPDK_TEST_ISAL=1 00:01:08.763 SPDK_RUN_ASAN=1 00:01:08.763 SPDK_RUN_UBSAN=1 00:01:08.763 SPDK_TEST_XNVME=1 00:01:08.763 SPDK_TEST_NVME_FDP=1 00:01:08.763 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:08.771 RUN_NIGHTLY=1 00:01:08.772 [Pipeline] } 00:01:08.790 [Pipeline] // stage 00:01:08.806 [Pipeline] stage 00:01:08.808 [Pipeline] { (Run VM) 00:01:08.825 [Pipeline] sh 00:01:09.109 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:09.110 + echo 'Start stage prepare_nvme.sh' 00:01:09.110 Start stage prepare_nvme.sh 00:01:09.110 + [[ -n 5 ]] 00:01:09.110 + disk_prefix=ex5 00:01:09.110 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:09.110 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:09.110 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:09.110 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.110 ++ SPDK_TEST_NVME=1 00:01:09.110 ++ SPDK_TEST_FTL=1 00:01:09.110 ++ SPDK_TEST_ISAL=1 00:01:09.110 ++ SPDK_RUN_ASAN=1 00:01:09.110 ++ SPDK_RUN_UBSAN=1 00:01:09.110 ++ SPDK_TEST_XNVME=1 00:01:09.110 ++ SPDK_TEST_NVME_FDP=1 00:01:09.110 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:09.110 ++ RUN_NIGHTLY=1 00:01:09.110 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:09.110 + 
nvme_files=() 00:01:09.110 + declare -A nvme_files 00:01:09.110 + backend_dir=/var/lib/libvirt/images/backends 00:01:09.110 + nvme_files['nvme.img']=5G 00:01:09.110 + nvme_files['nvme-cmb.img']=5G 00:01:09.110 + nvme_files['nvme-multi0.img']=4G 00:01:09.110 + nvme_files['nvme-multi1.img']=4G 00:01:09.110 + nvme_files['nvme-multi2.img']=4G 00:01:09.110 + nvme_files['nvme-openstack.img']=8G 00:01:09.110 + nvme_files['nvme-zns.img']=5G 00:01:09.110 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:09.110 + (( SPDK_TEST_FTL == 1 )) 00:01:09.110 + nvme_files["nvme-ftl.img"]=6G 00:01:09.110 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:09.110 + nvme_files["nvme-fdp.img"]=1G 00:01:09.110 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:09.110 + for nvme in "${!nvme_files[@]}" 00:01:09.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:09.110 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:09.110 + for nvme in "${!nvme_files[@]}" 00:01:09.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:01:09.110 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:09.110 + for nvme in "${!nvme_files[@]}" 00:01:09.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:09.110 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:09.110 + for nvme in "${!nvme_files[@]}" 00:01:09.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:09.110 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:09.110 + for nvme in "${!nvme_files[@]}" 00:01:09.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:09.110 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:09.110 + for nvme in "${!nvme_files[@]}" 00:01:09.110 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:09.370 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:09.370 + for nvme in "${!nvme_files[@]}" 00:01:09.370 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:09.370 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:09.370 + for nvme in "${!nvme_files[@]}" 00:01:09.370 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:01:09.370 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:09.370 + for nvme in "${!nvme_files[@]}" 00:01:09.370 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:09.370 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:09.370 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:09.370 + echo 'End stage prepare_nvme.sh' 00:01:09.370 End stage prepare_nvme.sh 00:01:09.382 [Pipeline] sh 00:01:09.664 + 
DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:09.664 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:09.664 00:01:09.664 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:09.664 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:09.664 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:09.664 HELP=0 00:01:09.664 DRY_RUN=0 00:01:09.664 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img, 00:01:09.664 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:09.664 NVME_AUTO_CREATE=0 00:01:09.664 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,, 00:01:09.664 NVME_CMB=,,,, 00:01:09.664 NVME_PMR=,,,, 00:01:09.664 NVME_ZNS=,,,, 00:01:09.664 NVME_MS=true,,,, 00:01:09.664 NVME_FDP=,,,on, 00:01:09.664 SPDK_VAGRANT_DISTRO=fedora38 00:01:09.664 SPDK_VAGRANT_VMCPU=10 00:01:09.664 SPDK_VAGRANT_VMRAM=12288 00:01:09.664 SPDK_VAGRANT_PROVIDER=libvirt 00:01:09.664 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:09.664 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:09.664 SPDK_OPENSTACK_NETWORK=0 00:01:09.664 VAGRANT_PACKAGE_BOX=0 00:01:09.664 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:09.664 FORCE_DISTRO=true 00:01:09.664 VAGRANT_BOX_VERSION= 00:01:09.664 EXTRA_VAGRANTFILES= 00:01:09.664 NIC_MODEL=e1000 00:01:09.664 00:01:09.664 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:01:09.664 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:12.200 Bringing machine 'default' up with 'libvirt' provider... 00:01:13.137 ==> default: Creating image (snapshot of base box volume). 00:01:13.396 ==> default: Creating domain with the following settings... 
00:01:13.397 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1722035344_4385fb8faa8cc8981b44 00:01:13.397 ==> default: -- Domain type: kvm 00:01:13.397 ==> default: -- Cpus: 10 00:01:13.397 ==> default: -- Feature: acpi 00:01:13.397 ==> default: -- Feature: apic 00:01:13.397 ==> default: -- Feature: pae 00:01:13.397 ==> default: -- Memory: 12288M 00:01:13.397 ==> default: -- Memory Backing: hugepages: 00:01:13.397 ==> default: -- Management MAC: 00:01:13.397 ==> default: -- Loader: 00:01:13.397 ==> default: -- Nvram: 00:01:13.397 ==> default: -- Base box: spdk/fedora38 00:01:13.397 ==> default: -- Storage pool: default 00:01:13.397 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1722035344_4385fb8faa8cc8981b44.img (20G) 00:01:13.397 ==> default: -- Volume Cache: default 00:01:13.397 ==> default: -- Kernel: 00:01:13.397 ==> default: -- Initrd: 00:01:13.397 ==> default: -- Graphics Type: vnc 00:01:13.397 ==> default: -- Graphics Port: -1 00:01:13.397 ==> default: -- Graphics IP: 127.0.0.1 00:01:13.397 ==> default: -- Graphics Password: Not defined 00:01:13.397 ==> default: -- Video Type: cirrus 00:01:13.397 ==> default: -- Video VRAM: 9216 00:01:13.397 ==> default: -- Sound Type: 00:01:13.397 ==> default: -- Keymap: en-us 00:01:13.397 ==> default: -- TPM Path: 00:01:13.397 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:13.397 ==> default: -- Command line args: 00:01:13.397 ==> default: -> value=-device, 00:01:13.397 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:13.397 ==> default: -> value=-drive, 00:01:13.397 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:13.397 ==> default: -> value=-device, 00:01:13.397 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:13.397 ==> default: -> value=-device, 00:01:13.397 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:13.397 ==> default: -> value=-drive, 00:01:13.397 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0, 00:01:13.397 ==> default: -> value=-device, 00:01:13.397 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:13.397 ==> default: -> value=-device, 00:01:13.397 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:01:13.397 ==> default: -> value=-drive, 00:01:13.397 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:13.397 ==> default: -> value=-device, 00:01:13.397 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:13.397 ==> default: -> value=-drive, 00:01:13.397 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:13.397 ==> default: -> value=-device, 00:01:13.397 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:13.397 ==> default: -> value=-drive, 00:01:13.397 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:13.397 ==> default: -> value=-device, 00:01:13.397 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:13.397 ==> default: -> value=-device, 00:01:13.397 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:13.397 ==> default: -> value=-device, 00:01:13.397 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:01:13.397 ==> default: -> value=-drive, 00:01:13.397 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:13.397 ==> default: -> value=-device, 00:01:13.397 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:13.657 ==> default: Creating shared folders metadata... 00:01:13.657 ==> default: Starting domain. 00:01:15.563 ==> default: Waiting for domain to get an IP address... 00:01:33.680 ==> default: Waiting for SSH to become available... 00:01:33.680 ==> default: Configuring and enabling network interfaces... 00:01:38.996 default: SSH address: 192.168.121.203:22 00:01:38.996 default: SSH username: vagrant 00:01:38.996 default: SSH auth method: private key 00:01:41.532 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:49.655 ==> default: Mounting SSHFS shared folder... 00:01:52.192 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:52.192 ==> default: Checking Mount.. 00:01:53.585 ==> default: Folder Successfully Mounted! 00:01:53.585 ==> default: Running provisioner: file... 00:01:54.963 default: ~/.gitconfig => .gitconfig 00:01:55.223 00:01:55.223 SUCCESS! 00:01:55.223 00:01:55.223 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:55.223 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:55.223 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:55.223 00:01:55.233 [Pipeline] } 00:01:55.251 [Pipeline] // stage 00:01:55.258 [Pipeline] dir 00:01:55.258 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:01:55.259 [Pipeline] { 00:01:55.270 [Pipeline] catchError 00:01:55.271 [Pipeline] { 00:01:55.285 [Pipeline] sh 00:01:55.565 + vagrant ssh-config --host vagrant 00:01:55.565 + sed -ne /^Host/,$p 00:01:55.565 + tee ssh_conf 00:01:58.100 Host vagrant 00:01:58.100 HostName 192.168.121.203 00:01:58.100 User vagrant 00:01:58.101 Port 22 00:01:58.101 UserKnownHostsFile /dev/null 00:01:58.101 StrictHostKeyChecking no 00:01:58.101 PasswordAuthentication no 00:01:58.101 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:58.101 IdentitiesOnly yes 00:01:58.101 LogLevel FATAL 00:01:58.101 ForwardAgent yes 00:01:58.101 ForwardX11 yes 00:01:58.101 00:01:58.115 [Pipeline] withEnv 00:01:58.117 [Pipeline] { 00:01:58.132 [Pipeline] sh 00:01:58.477 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:58.477 source /etc/os-release 00:01:58.477 [[ -e /image.version ]] && img=$(< /image.version) 00:01:58.477 # Minimal, systemd-like check. 
00:01:58.477 if [[ -e /.dockerenv ]]; then 00:01:58.477 # Clear garbage from the node's name: 00:01:58.477 # agt-er_autotest_547-896 -> autotest_547-896 00:01:58.477 # $HOSTNAME is the actual container id 00:01:58.477 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:58.477 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:58.477 # We can assume this is a mount from a host where container is running, 00:01:58.477 # so fetch its hostname to easily identify the target swarm worker. 00:01:58.477 container="$(< /etc/hostname) ($agent)" 00:01:58.477 else 00:01:58.477 # Fallback 00:01:58.477 container=$agent 00:01:58.477 fi 00:01:58.477 fi 00:01:58.477 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:58.477 00:01:58.749 [Pipeline] } 00:01:58.768 [Pipeline] // withEnv 00:01:58.777 [Pipeline] setCustomBuildProperty 00:01:58.792 [Pipeline] stage 00:01:58.794 [Pipeline] { (Tests) 00:01:58.811 [Pipeline] sh 00:01:59.092 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:59.366 [Pipeline] sh 00:01:59.648 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:59.922 [Pipeline] timeout 00:01:59.923 Timeout set to expire in 40 min 00:01:59.925 [Pipeline] { 00:01:59.939 [Pipeline] sh 00:02:00.220 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:00.787 HEAD is now at dbef7efac test: fix dpdk builds on ubuntu24 00:02:00.801 [Pipeline] sh 00:02:01.080 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:01.353 [Pipeline] sh 00:02:01.635 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:01.908 [Pipeline] sh 00:02:02.190 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:02.449 ++ readlink -f spdk_repo 00:02:02.449 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:02.449 + [[ -n /home/vagrant/spdk_repo ]] 00:02:02.449 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:02.449 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:02.449 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:02.449 + [[ ! 
-d /home/vagrant/spdk_repo/output ]]
00:02:02.449 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:02.449 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:02.449 + cd /home/vagrant/spdk_repo
00:02:02.449 + source /etc/os-release
00:02:02.449 ++ NAME='Fedora Linux'
00:02:02.449 ++ VERSION='38 (Cloud Edition)'
00:02:02.449 ++ ID=fedora
00:02:02.449 ++ VERSION_ID=38
00:02:02.449 ++ VERSION_CODENAME=
00:02:02.449 ++ PLATFORM_ID=platform:f38
00:02:02.449 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:02.449 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:02.449 ++ LOGO=fedora-logo-icon
00:02:02.449 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:02.449 ++ HOME_URL=https://fedoraproject.org/
00:02:02.449 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:02.449 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:02.449 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:02.449 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:02.449 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:02.449 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:02.449 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:02.449 ++ SUPPORT_END=2024-05-14
00:02:02.449 ++ VARIANT='Cloud Edition'
00:02:02.449 ++ VARIANT_ID=cloud
00:02:02.449 + uname -a
00:02:02.449 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:02.449 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:02.708 Hugepages
00:02:02.708 node     hugesize     free /  total
00:02:02.708 node0   1048576kB        0 /      0
00:02:02.708 node0      2048kB        0 /      0
00:02:02.708
00:02:02.708 Type     BDF             Vendor Device NUMA    Driver      Device  Block devices
00:02:02.708 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci  -       vda
00:02:02.708 NVMe     0000:00:06.0    1b36   0010   unknown nvme        nvme3   nvme3n1
00:02:02.708 NVMe     0000:00:07.0    1b36   0010   unknown nvme        nvme0   nvme0n1
00:02:02.708 NVMe     0000:00:08.0    1b36   0010   unknown nvme        nvme1   nvme1n1 nvme1n2 nvme1n3
00:02:02.967 NVMe     0000:00:09.0    1b36   0010   unknown nvme        nvme2   nvme2n1
00:02:02.967 + rm -f /tmp/spdk-ld-path
00:02:02.967 + source autorun-spdk.conf
00:02:02.967 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:02.967 ++ SPDK_TEST_NVME=1
00:02:02.967 ++ SPDK_TEST_FTL=1
00:02:02.967 ++ SPDK_TEST_ISAL=1
00:02:02.967 ++ SPDK_RUN_ASAN=1
00:02:02.967 ++ SPDK_RUN_UBSAN=1
00:02:02.967 ++ SPDK_TEST_XNVME=1
00:02:02.967 ++ SPDK_TEST_NVME_FDP=1
00:02:02.967 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:02.967 ++ RUN_NIGHTLY=1
00:02:02.967 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:02.967 + [[ -n '' ]]
00:02:02.967 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:02.967 + for M in /var/spdk/build-*-manifest.txt
00:02:02.967 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:02.967 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:02.967 + for M in /var/spdk/build-*-manifest.txt
00:02:02.967 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:02.968 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:02.968 ++ uname
00:02:02.968 + [[ Linux == \L\i\n\u\x ]]
00:02:02.968 + sudo dmesg -T
00:02:02.968 + sudo dmesg --clear
00:02:02.968 + dmesg_pid=5103
00:02:02.968 + [[ Fedora Linux == FreeBSD ]]
00:02:02.968 + sudo dmesg -Tw
00:02:02.968 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:02.968 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:02.968 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:02.968 + [[ -x /usr/src/fio-static/fio ]]
00:02:02.968 + export
FIO_BIN=/usr/src/fio-static/fio 00:02:02.968 + FIO_BIN=/usr/src/fio-static/fio 00:02:02.968 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:02.968 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:02.968 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:02.968 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.968 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.968 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:02.968 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.968 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.968 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:02.968 Test configuration: 00:02:02.968 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.968 SPDK_TEST_NVME=1 00:02:02.968 SPDK_TEST_FTL=1 00:02:02.968 SPDK_TEST_ISAL=1 00:02:02.968 SPDK_RUN_ASAN=1 00:02:02.968 SPDK_RUN_UBSAN=1 00:02:02.968 SPDK_TEST_XNVME=1 00:02:02.968 SPDK_TEST_NVME_FDP=1 00:02:02.968 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:03.227 RUN_NIGHTLY=1 23:09:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:03.227 23:09:54 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:03.227 23:09:54 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:03.227 23:09:54 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:03.227 23:09:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.227 23:09:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.227 23:09:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.227 23:09:54 -- paths/export.sh@5 -- $ export PATH 00:02:03.227 23:09:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.227 23:09:54 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:03.227 23:09:54 -- common/autobuild_common.sh@438 -- $ date +%s 00:02:03.227 23:09:54 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1722035394.XXXXXX 00:02:03.227 23:09:54 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1722035394.ILLZoK 
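
Editor's note on the prepare_nvme stage traced earlier: the backing-image creation boils down to an associative array of image names and sizes fed to create_nvme_img.sh. Below is a condensed reconstruction of that loop, not the script itself; the sizes, the ex5 disk prefix, and the backend directory are taken from the trace, and the FTL/FDP entries are appended only because SPDK_TEST_FTL and SPDK_TEST_NVME_FDP are set in this job.

    # Condensed reconstruction of the traced prepare_nvme.sh loop (illustrative).
    backend_dir=/var/lib/libvirt/images/backends
    declare -A nvme_files=(
        [nvme.img]=5G [nvme-cmb.img]=5G [nvme-zns.img]=5G
        [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G
        [nvme-ftl.img]=6G    # appended when SPDK_TEST_FTL == 1
        [nvme-fdp.img]=1G    # appended when SPDK_TEST_NVME_FDP == 1
    )
    for nvme in "${!nvme_files[@]}"; do
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/ex5-$nvme" -s "${nvme_files[$nvme]}"
    done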
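
The fourth controller wired up during the domain creation earlier is the one that exercises NVMe Flexible Data Placement: it hangs off an nvme-subsys device with fdp=on. Stripped of the libvirt plumbing, the QEMU wiring reduces to the sketch below; the device and drive arguments are copied from the log, while the surrounding machine options are omitted, so treat it as illustrative rather than a complete command line.

    # Illustrative only: FDP device/drive args from the log, other machine options omitted.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

The fdp.runs, fdp.nrg, and fdp.nruh properties size the reclaim units, reclaim groups, and reclaim unit handles the emulated subsystem advertises, which is why the job pins the emulator to QEMU v8.0.0, the first release with FDP emulation.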
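
Everything from this point on is driven by the autorun-spdk.conf assembled at the start of the job; the log above shows it being handed to spdk/autorun.sh inside the guest. A minimal sketch for replaying the same configuration by hand on a prepared guest follows, with values copied from the log and paths assuming the vagrant layout used here.

    # Recreate this job's test configuration and replay it (paths assume the vagrant guest layout).
    cat > /home/vagrant/spdk_repo/autorun-spdk.conf <<'EOF'
    SPDK_RUN_FUNCTIONAL_TEST=1
    SPDK_TEST_NVME=1
    SPDK_TEST_FTL=1
    SPDK_TEST_ISAL=1
    SPDK_RUN_ASAN=1
    SPDK_RUN_UBSAN=1
    SPDK_TEST_XNVME=1
    SPDK_TEST_NVME_FDP=1
    SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
    RUN_NIGHTLY=1
    EOF
    /home/vagrant/spdk_repo/spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf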
00:02:03.227 23:09:54 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:02:03.227 23:09:54 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 00:02:03.227 23:09:54 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:03.227 23:09:54 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:03.227 23:09:54 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:03.227 23:09:54 -- common/autobuild_common.sh@454 -- $ get_config_params 00:02:03.227 23:09:54 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:03.228 23:09:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.228 23:09:54 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:03.228 23:09:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:03.228 23:09:54 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:03.228 23:09:54 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:03.228 23:09:54 -- spdk/autobuild.sh@16 -- $ date -u 00:02:03.228 Fri Jul 26 11:09:54 PM UTC 2024 00:02:03.228 23:09:54 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:03.228 LTS-60-gdbef7efac 00:02:03.228 23:09:54 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:03.228 23:09:54 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:03.228 23:09:54 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:03.228 23:09:54 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:03.228 23:09:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.228 ************************************ 00:02:03.228 START TEST asan 00:02:03.228 ************************************ 00:02:03.228 using asan 00:02:03.228 23:09:54 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:02:03.228 00:02:03.228 real 0m0.001s 00:02:03.228 user 0m0.001s 00:02:03.228 sys 0m0.000s 00:02:03.228 23:09:54 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:03.228 23:09:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.228 ************************************ 00:02:03.228 END TEST asan 00:02:03.228 ************************************ 00:02:03.228 23:09:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:03.228 23:09:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:03.228 23:09:54 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:03.228 23:09:54 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:03.228 23:09:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.228 ************************************ 00:02:03.228 START TEST ubsan 00:02:03.228 ************************************ 00:02:03.228 using ubsan 00:02:03.228 23:09:54 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:03.228 00:02:03.228 real 0m0.000s 00:02:03.228 user 0m0.000s 00:02:03.228 sys 0m0.000s 00:02:03.228 23:09:54 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:03.228 ************************************ 00:02:03.228 END TEST ubsan 00:02:03.228 23:09:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.228 ************************************ 00:02:03.487 23:09:54 -- 
spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:03.487 23:09:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:03.487 23:09:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:03.487 23:09:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:03.487 23:09:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:03.487 23:09:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:03.487 23:09:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:03.487 23:09:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:03.487 23:09:54 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:03.487 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:03.487 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:04.054 Using 'verbs' RDMA provider
00:02:19.889 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:34.754 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:35.319 Creating mk/config.mk...done.
00:02:35.319 Creating mk/cc.flags.mk...done.
00:02:35.319 Type 'make' to build.
00:02:35.319 23:10:26 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:35.319 23:10:26 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:02:35.319 23:10:26 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:02:35.319 23:10:26 -- common/autotest_common.sh@10 -- $ set +x
00:02:35.319 ************************************
00:02:35.319 START TEST make
00:02:35.319 ************************************
00:02:35.319 23:10:26 -- common/autotest_common.sh@1104 -- $ make -j10
00:02:35.578 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:35.578   export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:35.578   meson setup builddir \
00:02:35.578     -Dwith-libaio=enabled \
00:02:35.578     -Dwith-liburing=enabled \
00:02:35.578     -Dwith-libvfn=disabled \
00:02:35.578     -Dwith-spdk=false && \
00:02:35.578   meson compile -C builddir && \
00:02:35.578   cd -)
00:02:35.578 make[1]: Nothing to be done for 'all'.
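
The xnvme submodule build kicked off by make above is plain Meson, so the same setup can be replayed stand-alone from an SPDK checkout when iterating on xnvme without the surrounding make wrapper. The options below are exactly as invoked above; -Dwith-spdk=false presumably disables xnvme's own SPDK backend, since here SPDK is the consumer of libxnvme rather than the other way around.

    # Stand-alone replay of the xnvme build invoked by the SPDK makefile above.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=false
    meson compile -C builddir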
00:02:37.503 The Meson build system 00:02:37.503 Version: 1.3.1 00:02:37.503 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:37.503 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:37.503 Build type: native build 00:02:37.503 Project name: xnvme 00:02:37.503 Project version: 0.7.3 00:02:37.504 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:37.504 C linker for the host machine: cc ld.bfd 2.39-16 00:02:37.504 Host machine cpu family: x86_64 00:02:37.504 Host machine cpu: x86_64 00:02:37.504 Message: host_machine.system: linux 00:02:37.504 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:37.504 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:37.504 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:37.504 Run-time dependency threads found: YES 00:02:37.504 Has header "setupapi.h" : NO 00:02:37.504 Has header "linux/blkzoned.h" : YES 00:02:37.504 Has header "linux/blkzoned.h" : YES (cached) 00:02:37.504 Has header "libaio.h" : YES 00:02:37.504 Library aio found: YES 00:02:37.504 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:37.504 Run-time dependency liburing found: YES 2.2 00:02:37.504 Dependency libvfn skipped: feature with-libvfn disabled 00:02:37.504 Run-time dependency appleframeworks found: NO (tried framework) 00:02:37.504 Run-time dependency appleframeworks found: NO (tried framework) 00:02:37.504 Configuring xnvme_config.h using configuration 00:02:37.504 Configuring xnvme.spec using configuration 00:02:37.504 Run-time dependency bash-completion found: YES 2.11 00:02:37.504 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:37.504 Program cp found: YES (/usr/bin/cp) 00:02:37.504 Has header "winsock2.h" : NO 00:02:37.504 Has header "dbghelp.h" : NO 00:02:37.504 Library rpcrt4 found: NO 00:02:37.504 Library rt found: YES 00:02:37.504 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:37.504 Found CMake: /usr/bin/cmake (3.27.7) 00:02:37.504 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:37.504 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:37.504 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:37.504 Build targets in project: 32 00:02:37.504 00:02:37.504 xnvme 0.7.3 00:02:37.504 00:02:37.504 User defined options 00:02:37.504 with-libaio : enabled 00:02:37.504 with-liburing: enabled 00:02:37.504 with-libvfn : disabled 00:02:37.504 with-spdk : false 00:02:37.504 00:02:37.504 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:38.071 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:38.071 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:38.071 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:38.071 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:38.071 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:38.071 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:38.071 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:38.071 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:38.071 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:38.071 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:38.071 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:38.071 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:38.071 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:38.071 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:38.071 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:38.071 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:38.071 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:38.071 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:38.329 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:38.329 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:38.329 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:38.329 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:38.329 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:38.329 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:38.329 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:38.329 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:38.329 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:38.329 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:38.329 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:38.329 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:38.329 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:38.329 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:38.329 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:38.329 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:38.329 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:38.329 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:38.329 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:38.329 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:38.329 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:38.329 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:38.329 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:38.329 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:38.329 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:38.329 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:38.329 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:38.329 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:38.330 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:38.330 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:38.330 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:38.330 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:38.330 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:38.330 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:38.594 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:38.594 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:38.594 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:38.594 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:38.594 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:38.594 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:38.594 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:38.594 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:38.594 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:38.594 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:38.594 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:38.594 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:38.594 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:38.594 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:38.594 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:38.594 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:38.594 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:38.594 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:38.594 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:38.594 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:38.594 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:38.855 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:38.855 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:38.855 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:38.855 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:38.855 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:38.855 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:38.855 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:38.855 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:38.855 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:38.855 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:38.855 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:38.855 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:38.855 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:38.855 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:38.855 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:38.855 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:38.855 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:38.855 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:38.855 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:38.855 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:39.113 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:39.113 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:39.113 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:39.113 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:39.113 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:39.113 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:39.113 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:39.113 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:39.113 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:39.113 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:39.113 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:39.113 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:39.113 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:39.113 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:39.113 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:39.113 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:39.113 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:39.113 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:39.113 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:39.113 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:39.113 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:39.113 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:39.113 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:39.113 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:39.113 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:39.113 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:39.113 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:39.113 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:39.113 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:39.113 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:39.113 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:39.372 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:39.372 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:39.372 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:39.372 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:39.372 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:39.372 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:39.372 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:39.372 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:39.372 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:39.372 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:39.372 [134/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:39.372 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:39.372 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:39.372 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:39.372 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:39.372 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:39.372 [140/203] Linking target lib/libxnvme.so 00:02:39.372 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:39.372 [142/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 
00:02:39.372 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:39.372 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:39.630 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:39.630 [146/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:39.630 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:39.630 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:39.630 [149/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:39.630 [150/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:39.630 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:39.630 [152/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:39.631 [153/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:39.631 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:39.631 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:39.631 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:39.631 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:39.631 [158/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:39.631 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:39.631 [160/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:39.631 [161/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:39.631 [162/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:39.631 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:39.890 [164/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:39.890 [165/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:39.890 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:39.890 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:39.890 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:39.890 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:39.890 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:39.890 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:39.890 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:39.890 [173/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:39.890 [174/203] Linking static target lib/libxnvme.a 00:02:39.890 [175/203] Linking target tests/xnvme_tests_async_intf 00:02:40.149 [176/203] Linking target tests/xnvme_tests_ioworker 00:02:40.149 [177/203] Linking target tests/xnvme_tests_znd_append 00:02:40.149 [178/203] Linking target tests/xnvme_tests_buf 00:02:40.149 [179/203] Linking target tests/xnvme_tests_scc 00:02:40.149 [180/203] Linking target tests/xnvme_tests_enum 00:02:40.149 [181/203] Linking target tests/xnvme_tests_cli 00:02:40.149 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:40.149 [183/203] Linking target tests/xnvme_tests_lblk 00:02:40.149 [184/203] Linking target tests/xnvme_tests_xnvme_file 00:02:40.149 [185/203] Linking target tests/xnvme_tests_map 00:02:40.149 [186/203] Linking target tests/xnvme_tests_znd_state 00:02:40.149 [187/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:40.149 [188/203] Linking target tests/xnvme_tests_kvs 00:02:40.149 [189/203] Linking target tools/xdd 00:02:40.149 [190/203] Linking target tools/xnvme 00:02:40.149 [191/203] Linking target 
tests/xnvme_tests_znd_zrwa 00:02:40.149 [192/203] Linking target tools/kvs 00:02:40.149 [193/203] Linking target tools/lblk 00:02:40.149 [194/203] Linking target examples/xnvme_dev 00:02:40.149 [195/203] Linking target tools/zoned 00:02:40.149 [196/203] Linking target examples/xnvme_io_async 00:02:40.149 [197/203] Linking target examples/xnvme_single_sync 00:02:40.149 [198/203] Linking target tools/xnvme_file 00:02:40.149 [199/203] Linking target examples/xnvme_single_async 00:02:40.149 [200/203] Linking target examples/xnvme_enum 00:02:40.149 [201/203] Linking target examples/xnvme_hello 00:02:40.149 [202/203] Linking target examples/zoned_io_async 00:02:40.149 [203/203] Linking target examples/zoned_io_sync 00:02:40.149 INFO: autodetecting backend as ninja 00:02:40.149 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:40.149 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:45.419 The Meson build system 00:02:45.419 Version: 1.3.1 00:02:45.419 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:45.419 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:45.419 Build type: native build 00:02:45.419 Program cat found: YES (/usr/bin/cat) 00:02:45.419 Project name: DPDK 00:02:45.419 Project version: 23.11.0 00:02:45.419 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:45.419 C linker for the host machine: cc ld.bfd 2.39-16 00:02:45.419 Host machine cpu family: x86_64 00:02:45.419 Host machine cpu: x86_64 00:02:45.419 Message: ## Building in Developer Mode ## 00:02:45.419 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:45.419 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:45.419 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:45.419 Program python3 found: YES (/usr/bin/python3) 00:02:45.419 Program cat found: YES (/usr/bin/cat) 00:02:45.419 Compiler for C supports arguments -march=native: YES 00:02:45.419 Checking for size of "void *" : 8 00:02:45.419 Checking for size of "void *" : 8 (cached) 00:02:45.419 Library m found: YES 00:02:45.419 Library numa found: YES 00:02:45.419 Has header "numaif.h" : YES 00:02:45.419 Library fdt found: NO 00:02:45.419 Library execinfo found: NO 00:02:45.419 Has header "execinfo.h" : YES 00:02:45.419 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:45.419 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:45.419 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:45.419 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:45.419 Run-time dependency openssl found: YES 3.0.9 00:02:45.419 Run-time dependency libpcap found: YES 1.10.4 00:02:45.419 Has header "pcap.h" with dependency libpcap: YES 00:02:45.419 Compiler for C supports arguments -Wcast-qual: YES 00:02:45.419 Compiler for C supports arguments -Wdeprecated: YES 00:02:45.419 Compiler for C supports arguments -Wformat: YES 00:02:45.419 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:45.419 Compiler for C supports arguments -Wformat-security: NO 00:02:45.419 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:45.419 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:45.419 Compiler for C supports arguments -Wnested-externs: YES 00:02:45.419 Compiler for C supports arguments -Wold-style-definition: YES 00:02:45.419 Compiler for C supports arguments 
-Wpointer-arith: YES 00:02:45.419 Compiler for C supports arguments -Wsign-compare: YES 00:02:45.419 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:45.419 Compiler for C supports arguments -Wundef: YES 00:02:45.419 Compiler for C supports arguments -Wwrite-strings: YES 00:02:45.419 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:45.419 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:45.419 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:45.419 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:45.419 Program objdump found: YES (/usr/bin/objdump) 00:02:45.419 Compiler for C supports arguments -mavx512f: YES 00:02:45.419 Checking if "AVX512 checking" compiles: YES 00:02:45.419 Fetching value of define "__SSE4_2__" : 1 00:02:45.419 Fetching value of define "__AES__" : 1 00:02:45.419 Fetching value of define "__AVX__" : 1 00:02:45.419 Fetching value of define "__AVX2__" : 1 00:02:45.419 Fetching value of define "__AVX512BW__" : 1 00:02:45.419 Fetching value of define "__AVX512CD__" : 1 00:02:45.419 Fetching value of define "__AVX512DQ__" : 1 00:02:45.419 Fetching value of define "__AVX512F__" : 1 00:02:45.419 Fetching value of define "__AVX512VL__" : 1 00:02:45.419 Fetching value of define "__PCLMUL__" : 1 00:02:45.419 Fetching value of define "__RDRND__" : 1 00:02:45.419 Fetching value of define "__RDSEED__" : 1 00:02:45.419 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:45.419 Fetching value of define "__znver1__" : (undefined) 00:02:45.419 Fetching value of define "__znver2__" : (undefined) 00:02:45.419 Fetching value of define "__znver3__" : (undefined) 00:02:45.419 Fetching value of define "__znver4__" : (undefined) 00:02:45.419 Library asan found: YES 00:02:45.419 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:45.419 Message: lib/log: Defining dependency "log" 00:02:45.420 Message: lib/kvargs: Defining dependency "kvargs" 00:02:45.420 Message: lib/telemetry: Defining dependency "telemetry" 00:02:45.420 Library rt found: YES 00:02:45.420 Checking for function "getentropy" : NO 00:02:45.420 Message: lib/eal: Defining dependency "eal" 00:02:45.420 Message: lib/ring: Defining dependency "ring" 00:02:45.420 Message: lib/rcu: Defining dependency "rcu" 00:02:45.420 Message: lib/mempool: Defining dependency "mempool" 00:02:45.420 Message: lib/mbuf: Defining dependency "mbuf" 00:02:45.420 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:45.420 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:45.420 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:45.420 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:45.420 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:45.420 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:45.420 Compiler for C supports arguments -mpclmul: YES 00:02:45.420 Compiler for C supports arguments -maes: YES 00:02:45.420 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:45.420 Compiler for C supports arguments -mavx512bw: YES 00:02:45.420 Compiler for C supports arguments -mavx512dq: YES 00:02:45.420 Compiler for C supports arguments -mavx512vl: YES 00:02:45.420 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:45.420 Compiler for C supports arguments -mavx2: YES 00:02:45.420 Compiler for C supports arguments -mavx: YES 00:02:45.420 Message: lib/net: Defining dependency "net" 00:02:45.420 Message: lib/meter: Defining dependency "meter" 
00:02:45.420 Message: lib/ethdev: Defining dependency "ethdev" 00:02:45.420 Message: lib/pci: Defining dependency "pci" 00:02:45.420 Message: lib/cmdline: Defining dependency "cmdline" 00:02:45.420 Message: lib/hash: Defining dependency "hash" 00:02:45.420 Message: lib/timer: Defining dependency "timer" 00:02:45.420 Message: lib/compressdev: Defining dependency "compressdev" 00:02:45.420 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:45.420 Message: lib/dmadev: Defining dependency "dmadev" 00:02:45.420 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:45.420 Message: lib/power: Defining dependency "power" 00:02:45.420 Message: lib/reorder: Defining dependency "reorder" 00:02:45.420 Message: lib/security: Defining dependency "security" 00:02:45.420 Has header "linux/userfaultfd.h" : YES 00:02:45.420 Has header "linux/vduse.h" : YES 00:02:45.420 Message: lib/vhost: Defining dependency "vhost" 00:02:45.420 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:45.420 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:45.420 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:45.420 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:45.420 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:45.420 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:45.420 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:45.420 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:45.420 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:45.420 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:45.420 Program doxygen found: YES (/usr/bin/doxygen) 00:02:45.420 Configuring doxy-api-html.conf using configuration 00:02:45.420 Configuring doxy-api-man.conf using configuration 00:02:45.420 Program mandb found: YES (/usr/bin/mandb) 00:02:45.420 Program sphinx-build found: NO 00:02:45.420 Configuring rte_build_config.h using configuration 00:02:45.420 Message: 00:02:45.420 ================= 00:02:45.420 Applications Enabled 00:02:45.420 ================= 00:02:45.420 00:02:45.420 apps: 00:02:45.420 00:02:45.420 00:02:45.420 Message: 00:02:45.420 ================= 00:02:45.420 Libraries Enabled 00:02:45.420 ================= 00:02:45.420 00:02:45.420 libs: 00:02:45.420 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:45.420 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:45.420 cryptodev, dmadev, power, reorder, security, vhost, 00:02:45.420 00:02:45.420 Message: 00:02:45.420 =============== 00:02:45.420 Drivers Enabled 00:02:45.420 =============== 00:02:45.420 00:02:45.420 common: 00:02:45.420 00:02:45.420 bus: 00:02:45.420 pci, vdev, 00:02:45.420 mempool: 00:02:45.420 ring, 00:02:45.420 dma: 00:02:45.420 00:02:45.420 net: 00:02:45.420 00:02:45.420 crypto: 00:02:45.420 00:02:45.420 compress: 00:02:45.420 00:02:45.420 vdpa: 00:02:45.420 00:02:45.420 00:02:45.420 Message: 00:02:45.420 ================= 00:02:45.420 Content Skipped 00:02:45.420 ================= 00:02:45.420 00:02:45.420 apps: 00:02:45.420 dumpcap: explicitly disabled via build config 00:02:45.420 graph: explicitly disabled via build config 00:02:45.420 pdump: explicitly disabled via build config 00:02:45.420 proc-info: explicitly disabled via build config 00:02:45.420 test-acl: explicitly disabled via build config 00:02:45.420 test-bbdev: explicitly disabled via build config 
00:02:45.420 test-cmdline: explicitly disabled via build config 00:02:45.420 test-compress-perf: explicitly disabled via build config 00:02:45.420 test-crypto-perf: explicitly disabled via build config 00:02:45.420 test-dma-perf: explicitly disabled via build config 00:02:45.420 test-eventdev: explicitly disabled via build config 00:02:45.420 test-fib: explicitly disabled via build config 00:02:45.420 test-flow-perf: explicitly disabled via build config 00:02:45.420 test-gpudev: explicitly disabled via build config 00:02:45.420 test-mldev: explicitly disabled via build config 00:02:45.420 test-pipeline: explicitly disabled via build config 00:02:45.420 test-pmd: explicitly disabled via build config 00:02:45.420 test-regex: explicitly disabled via build config 00:02:45.420 test-sad: explicitly disabled via build config 00:02:45.420 test-security-perf: explicitly disabled via build config 00:02:45.420 00:02:45.420 libs: 00:02:45.420 metrics: explicitly disabled via build config 00:02:45.420 acl: explicitly disabled via build config 00:02:45.420 bbdev: explicitly disabled via build config 00:02:45.420 bitratestats: explicitly disabled via build config 00:02:45.420 bpf: explicitly disabled via build config 00:02:45.420 cfgfile: explicitly disabled via build config 00:02:45.420 distributor: explicitly disabled via build config 00:02:45.420 efd: explicitly disabled via build config 00:02:45.420 eventdev: explicitly disabled via build config 00:02:45.420 dispatcher: explicitly disabled via build config 00:02:45.420 gpudev: explicitly disabled via build config 00:02:45.420 gro: explicitly disabled via build config 00:02:45.420 gso: explicitly disabled via build config 00:02:45.420 ip_frag: explicitly disabled via build config 00:02:45.420 jobstats: explicitly disabled via build config 00:02:45.420 latencystats: explicitly disabled via build config 00:02:45.420 lpm: explicitly disabled via build config 00:02:45.420 member: explicitly disabled via build config 00:02:45.420 pcapng: explicitly disabled via build config 00:02:45.420 rawdev: explicitly disabled via build config 00:02:45.420 regexdev: explicitly disabled via build config 00:02:45.420 mldev: explicitly disabled via build config 00:02:45.420 rib: explicitly disabled via build config 00:02:45.420 sched: explicitly disabled via build config 00:02:45.420 stack: explicitly disabled via build config 00:02:45.420 ipsec: explicitly disabled via build config 00:02:45.420 pdcp: explicitly disabled via build config 00:02:45.420 fib: explicitly disabled via build config 00:02:45.420 port: explicitly disabled via build config 00:02:45.420 pdump: explicitly disabled via build config 00:02:45.420 table: explicitly disabled via build config 00:02:45.420 pipeline: explicitly disabled via build config 00:02:45.420 graph: explicitly disabled via build config 00:02:45.420 node: explicitly disabled via build config 00:02:45.420 00:02:45.420 drivers: 00:02:45.420 common/cpt: not in enabled drivers build config 00:02:45.420 common/dpaax: not in enabled drivers build config 00:02:45.420 common/iavf: not in enabled drivers build config 00:02:45.420 common/idpf: not in enabled drivers build config 00:02:45.420 common/mvep: not in enabled drivers build config 00:02:45.420 common/octeontx: not in enabled drivers build config 00:02:45.420 bus/auxiliary: not in enabled drivers build config 00:02:45.420 bus/cdx: not in enabled drivers build config 00:02:45.420 bus/dpaa: not in enabled drivers build config 00:02:45.420 bus/fslmc: not in enabled drivers build config 
00:02:45.420 bus/ifpga: not in enabled drivers build config 00:02:45.420 bus/platform: not in enabled drivers build config 00:02:45.420 bus/vmbus: not in enabled drivers build config 00:02:45.420 common/cnxk: not in enabled drivers build config 00:02:45.420 common/mlx5: not in enabled drivers build config 00:02:45.420 common/nfp: not in enabled drivers build config 00:02:45.420 common/qat: not in enabled drivers build config 00:02:45.420 common/sfc_efx: not in enabled drivers build config 00:02:45.420 mempool/bucket: not in enabled drivers build config 00:02:45.420 mempool/cnxk: not in enabled drivers build config 00:02:45.420 mempool/dpaa: not in enabled drivers build config 00:02:45.420 mempool/dpaa2: not in enabled drivers build config 00:02:45.420 mempool/octeontx: not in enabled drivers build config 00:02:45.420 mempool/stack: not in enabled drivers build config 00:02:45.420 dma/cnxk: not in enabled drivers build config 00:02:45.420 dma/dpaa: not in enabled drivers build config 00:02:45.420 dma/dpaa2: not in enabled drivers build config 00:02:45.420 dma/hisilicon: not in enabled drivers build config 00:02:45.420 dma/idxd: not in enabled drivers build config 00:02:45.420 dma/ioat: not in enabled drivers build config 00:02:45.420 dma/skeleton: not in enabled drivers build config 00:02:45.420 net/af_packet: not in enabled drivers build config 00:02:45.420 net/af_xdp: not in enabled drivers build config 00:02:45.420 net/ark: not in enabled drivers build config 00:02:45.420 net/atlantic: not in enabled drivers build config 00:02:45.420 net/avp: not in enabled drivers build config 00:02:45.421 net/axgbe: not in enabled drivers build config 00:02:45.421 net/bnx2x: not in enabled drivers build config 00:02:45.421 net/bnxt: not in enabled drivers build config 00:02:45.421 net/bonding: not in enabled drivers build config 00:02:45.421 net/cnxk: not in enabled drivers build config 00:02:45.421 net/cpfl: not in enabled drivers build config 00:02:45.421 net/cxgbe: not in enabled drivers build config 00:02:45.421 net/dpaa: not in enabled drivers build config 00:02:45.421 net/dpaa2: not in enabled drivers build config 00:02:45.421 net/e1000: not in enabled drivers build config 00:02:45.421 net/ena: not in enabled drivers build config 00:02:45.421 net/enetc: not in enabled drivers build config 00:02:45.421 net/enetfec: not in enabled drivers build config 00:02:45.421 net/enic: not in enabled drivers build config 00:02:45.421 net/failsafe: not in enabled drivers build config 00:02:45.421 net/fm10k: not in enabled drivers build config 00:02:45.421 net/gve: not in enabled drivers build config 00:02:45.421 net/hinic: not in enabled drivers build config 00:02:45.421 net/hns3: not in enabled drivers build config 00:02:45.421 net/i40e: not in enabled drivers build config 00:02:45.421 net/iavf: not in enabled drivers build config 00:02:45.421 net/ice: not in enabled drivers build config 00:02:45.421 net/idpf: not in enabled drivers build config 00:02:45.421 net/igc: not in enabled drivers build config 00:02:45.421 net/ionic: not in enabled drivers build config 00:02:45.421 net/ipn3ke: not in enabled drivers build config 00:02:45.421 net/ixgbe: not in enabled drivers build config 00:02:45.421 net/mana: not in enabled drivers build config 00:02:45.421 net/memif: not in enabled drivers build config 00:02:45.421 net/mlx4: not in enabled drivers build config 00:02:45.421 net/mlx5: not in enabled drivers build config 00:02:45.421 net/mvneta: not in enabled drivers build config 00:02:45.421 net/mvpp2: not in enabled 
drivers build config 00:02:45.421 net/netvsc: not in enabled drivers build config 00:02:45.421 net/nfb: not in enabled drivers build config 00:02:45.421 net/nfp: not in enabled drivers build config 00:02:45.421 net/ngbe: not in enabled drivers build config 00:02:45.421 net/null: not in enabled drivers build config 00:02:45.421 net/octeontx: not in enabled drivers build config 00:02:45.421 net/octeon_ep: not in enabled drivers build config 00:02:45.421 net/pcap: not in enabled drivers build config 00:02:45.421 net/pfe: not in enabled drivers build config 00:02:45.421 net/qede: not in enabled drivers build config 00:02:45.421 net/ring: not in enabled drivers build config 00:02:45.421 net/sfc: not in enabled drivers build config 00:02:45.421 net/softnic: not in enabled drivers build config 00:02:45.421 net/tap: not in enabled drivers build config 00:02:45.421 net/thunderx: not in enabled drivers build config 00:02:45.421 net/txgbe: not in enabled drivers build config 00:02:45.421 net/vdev_netvsc: not in enabled drivers build config 00:02:45.421 net/vhost: not in enabled drivers build config 00:02:45.421 net/virtio: not in enabled drivers build config 00:02:45.421 net/vmxnet3: not in enabled drivers build config 00:02:45.421 raw/*: missing internal dependency, "rawdev" 00:02:45.421 crypto/armv8: not in enabled drivers build config 00:02:45.421 crypto/bcmfs: not in enabled drivers build config 00:02:45.421 crypto/caam_jr: not in enabled drivers build config 00:02:45.421 crypto/ccp: not in enabled drivers build config 00:02:45.421 crypto/cnxk: not in enabled drivers build config 00:02:45.421 crypto/dpaa_sec: not in enabled drivers build config 00:02:45.421 crypto/dpaa2_sec: not in enabled drivers build config 00:02:45.421 crypto/ipsec_mb: not in enabled drivers build config 00:02:45.421 crypto/mlx5: not in enabled drivers build config 00:02:45.421 crypto/mvsam: not in enabled drivers build config 00:02:45.421 crypto/nitrox: not in enabled drivers build config 00:02:45.421 crypto/null: not in enabled drivers build config 00:02:45.421 crypto/octeontx: not in enabled drivers build config 00:02:45.421 crypto/openssl: not in enabled drivers build config 00:02:45.421 crypto/scheduler: not in enabled drivers build config 00:02:45.421 crypto/uadk: not in enabled drivers build config 00:02:45.421 crypto/virtio: not in enabled drivers build config 00:02:45.421 compress/isal: not in enabled drivers build config 00:02:45.421 compress/mlx5: not in enabled drivers build config 00:02:45.421 compress/octeontx: not in enabled drivers build config 00:02:45.421 compress/zlib: not in enabled drivers build config 00:02:45.421 regex/*: missing internal dependency, "regexdev" 00:02:45.421 ml/*: missing internal dependency, "mldev" 00:02:45.421 vdpa/ifc: not in enabled drivers build config 00:02:45.421 vdpa/mlx5: not in enabled drivers build config 00:02:45.421 vdpa/nfp: not in enabled drivers build config 00:02:45.421 vdpa/sfc: not in enabled drivers build config 00:02:45.421 event/*: missing internal dependency, "eventdev" 00:02:45.421 baseband/*: missing internal dependency, "bbdev" 00:02:45.421 gpu/*: missing internal dependency, "gpudev" 00:02:45.421 00:02:45.421 00:02:45.421 Build targets in project: 85 00:02:45.421 00:02:45.421 DPDK 23.11.0 00:02:45.421 00:02:45.421 User defined options 00:02:45.421 buildtype : debug 00:02:45.421 default_library : shared 00:02:45.421 libdir : lib 00:02:45.421 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:45.421 b_sanitize : address 00:02:45.421 c_args : -fPIC -Werror 
-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:45.421 c_link_args : 00:02:45.421 cpu_instruction_set: native 00:02:45.421 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:45.421 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:45.421 enable_docs : false 00:02:45.421 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:45.421 enable_kmods : false 00:02:45.421 tests : false 00:02:45.421 00:02:45.421 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:45.421 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:45.421 [1/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:45.421 [2/265] Linking static target lib/librte_kvargs.a 00:02:45.421 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:45.421 [4/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:45.421 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:45.421 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:45.421 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:45.421 [8/265] Linking static target lib/librte_log.a 00:02:45.421 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:45.421 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:45.680 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:45.680 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:45.680 [13/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.680 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:45.680 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:45.938 [16/265] Linking static target lib/librte_telemetry.a 00:02:45.938 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:45.938 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:45.938 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:46.197 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:46.197 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:46.197 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:46.197 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:46.197 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:46.197 [25/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.455 [26/265] Linking target lib/librte_log.so.24.0 00:02:46.455 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:46.455 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:46.455 [29/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:46.713 [30/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:46.713 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:46.713 [32/265] Linking target lib/librte_kvargs.so.24.0 00:02:46.713 [33/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.713 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:46.713 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:46.713 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:46.713 [37/265] Linking target lib/librte_telemetry.so.24.0 00:02:46.713 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:46.971 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:46.971 [40/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:46.971 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:46.971 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:46.971 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:46.971 [44/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:47.229 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.229 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:47.229 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.229 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:47.488 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:47.488 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:47.488 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:47.488 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:47.488 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:47.488 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:47.746 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:47.746 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:47.746 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:47.746 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:47.746 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:48.004 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:48.004 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:48.004 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:48.004 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:48.004 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:48.004 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:48.262 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.262 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:48.262 [68/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:48.519 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:48.519 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:48.519 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:48.519 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:48.519 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.519 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:48.519 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.519 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:48.520 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:48.777 [78/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:48.777 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:48.777 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:48.777 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:49.072 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:49.072 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:49.332 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:49.332 [85/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:49.332 [86/265] Linking static target lib/librte_eal.a 00:02:49.332 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:49.332 [88/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:49.332 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:49.332 [90/265] Linking static target lib/librte_ring.a 00:02:49.591 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:49.591 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.591 [93/265] Linking static target lib/librte_mempool.a 00:02:49.591 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:49.591 [95/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.591 [96/265] Linking static target lib/librte_rcu.a 00:02:49.849 [97/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:49.849 [98/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:49.849 [99/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.849 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:50.107 [101/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:50.107 [102/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.107 [103/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:50.365 [104/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:50.365 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:50.365 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:50.365 [107/265] Linking static target lib/librte_net.a 00:02:50.365 [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:50.365 [109/265] Linking static target lib/librte_meter.a 00:02:50.365 [110/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:50.365 [111/265] 
Linking static target lib/librte_mbuf.a 00:02:50.932 [112/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.932 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:50.932 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:50.932 [115/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.932 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.932 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.932 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:51.498 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:51.755 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:51.755 [121/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.755 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:52.011 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:52.011 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:52.011 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:52.011 [126/265] Linking static target lib/librte_pci.a 00:02:52.269 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:52.269 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:52.269 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:52.269 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:52.269 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:52.269 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:52.527 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:52.527 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:52.527 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:52.527 [136/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.527 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:52.527 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:52.527 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:52.527 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:52.527 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:52.785 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:52.785 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:52.785 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:52.785 [145/265] Linking static target lib/librte_cmdline.a 00:02:53.043 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:53.043 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:53.301 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:53.301 [149/265] Linking static target lib/librte_timer.a 00:02:53.301 [150/265] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:53.301 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:53.559 [152/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:53.559 [153/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:53.559 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:53.559 [155/265] Linking static target lib/librte_compressdev.a 00:02:53.559 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:53.818 [157/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:53.818 [158/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.818 [159/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:53.818 [160/265] Linking static target lib/librte_hash.a 00:02:53.818 [161/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:53.818 [162/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:53.818 [163/265] Linking static target lib/librte_ethdev.a 00:02:54.076 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:54.076 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:54.076 [166/265] Linking static target lib/librte_dmadev.a 00:02:54.076 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:54.334 [168/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:54.334 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:54.334 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.334 [171/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:54.334 [172/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.593 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.593 [174/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:54.593 [175/265] Linking static target lib/librte_cryptodev.a 00:02:54.593 [176/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:54.593 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:54.851 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:54.851 [179/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.851 [180/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:54.851 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:55.109 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:55.109 [183/265] Linking static target lib/librte_power.a 00:02:55.367 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:55.367 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:55.367 [186/265] Linking static target lib/librte_reorder.a 00:02:55.367 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.367 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:55.367 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:55.367 [190/265] Linking static target 
lib/librte_security.a 00:02:55.933 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.933 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:56.192 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.192 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:56.192 [195/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.192 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:56.451 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:56.451 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:56.710 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:56.710 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:56.969 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:56.969 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:56.969 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:56.969 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:56.969 [205/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:56.969 [206/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.228 [207/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:57.228 [208/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:57.228 [209/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:57.228 [210/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.228 [211/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.228 [212/265] Linking static target drivers/librte_bus_pci.a 00:02:57.487 [213/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:57.487 [214/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.487 [215/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.487 [216/265] Linking static target drivers/librte_bus_vdev.a 00:02:57.487 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.487 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.746 [219/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:57.746 [220/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.746 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.746 [222/265] Linking static target drivers/librte_mempool_ring.a 00:02:57.746 [223/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.006 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.574 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:02.768 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.768 [227/265] Linking static target 
lib/librte_vhost.a 00:03:02.768 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.768 [229/265] Linking target lib/librte_eal.so.24.0 00:03:02.768 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:02.768 [231/265] Linking target lib/librte_ring.so.24.0 00:03:02.768 [232/265] Linking target lib/librte_timer.so.24.0 00:03:02.768 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:02.768 [234/265] Linking target lib/librte_meter.so.24.0 00:03:02.768 [235/265] Linking target lib/librte_pci.so.24.0 00:03:02.768 [236/265] Linking target lib/librte_dmadev.so.24.0 00:03:02.768 [237/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:02.768 [238/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:02.768 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:02.768 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:02.768 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:02.768 [242/265] Linking target lib/librte_rcu.so.24.0 00:03:02.768 [243/265] Linking target lib/librte_mempool.so.24.0 00:03:03.027 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:03.027 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:03.027 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:03.027 [247/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:03.027 [248/265] Linking target lib/librte_mbuf.so.24.0 00:03:03.287 [249/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.287 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:03.287 [251/265] Linking target lib/librte_compressdev.so.24.0 00:03:03.287 [252/265] Linking target lib/librte_net.so.24.0 00:03:03.287 [253/265] Linking target lib/librte_reorder.so.24.0 00:03:03.287 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:03:03.546 [255/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:03.546 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:03.546 [257/265] Linking target lib/librte_cmdline.so.24.0 00:03:03.546 [258/265] Linking target lib/librte_hash.so.24.0 00:03:03.546 [259/265] Linking target lib/librte_security.so.24.0 00:03:03.546 [260/265] Linking target lib/librte_ethdev.so.24.0 00:03:03.546 [261/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:03.805 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:03.805 [263/265] Linking target lib/librte_power.so.24.0 00:03:04.743 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.743 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:04.743 INFO: autodetecting backend as ninja 00:03:04.743 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:05.680 CC lib/log/log.o 00:03:05.680 CC lib/log/log_deprecated.o 00:03:05.680 CC lib/log/log_flags.o 00:03:05.680 CC lib/ut/ut.o 00:03:05.680 CC lib/ut_mock/mock.o 00:03:05.939 LIB libspdk_ut_mock.a 00:03:05.939 LIB libspdk_ut.a 00:03:05.939 LIB libspdk_log.a 00:03:05.939 SO 
libspdk_ut_mock.so.5.0 00:03:05.939 SO libspdk_ut.so.1.0 00:03:05.939 SO libspdk_log.so.6.1 00:03:05.939 SYMLINK libspdk_ut_mock.so 00:03:06.198 SYMLINK libspdk_ut.so 00:03:06.198 SYMLINK libspdk_log.so 00:03:06.198 CC lib/util/base64.o 00:03:06.198 CC lib/dma/dma.o 00:03:06.198 CC lib/util/bit_array.o 00:03:06.198 CC lib/util/crc16.o 00:03:06.198 CC lib/util/cpuset.o 00:03:06.198 CC lib/util/crc32c.o 00:03:06.198 CC lib/util/crc32.o 00:03:06.198 CXX lib/trace_parser/trace.o 00:03:06.458 CC lib/ioat/ioat.o 00:03:06.458 CC lib/vfio_user/host/vfio_user_pci.o 00:03:06.458 CC lib/util/crc32_ieee.o 00:03:06.458 CC lib/util/crc64.o 00:03:06.458 CC lib/vfio_user/host/vfio_user.o 00:03:06.458 LIB libspdk_dma.a 00:03:06.458 SO libspdk_dma.so.3.0 00:03:06.458 CC lib/util/dif.o 00:03:06.458 CC lib/util/fd.o 00:03:06.458 CC lib/util/file.o 00:03:06.458 CC lib/util/hexlify.o 00:03:06.458 CC lib/util/iov.o 00:03:06.458 SYMLINK libspdk_dma.so 00:03:06.458 CC lib/util/math.o 00:03:06.717 LIB libspdk_ioat.a 00:03:06.717 CC lib/util/pipe.o 00:03:06.717 CC lib/util/strerror_tls.o 00:03:06.717 SO libspdk_ioat.so.6.0 00:03:06.717 CC lib/util/string.o 00:03:06.717 CC lib/util/uuid.o 00:03:06.717 LIB libspdk_vfio_user.a 00:03:06.717 CC lib/util/fd_group.o 00:03:06.717 SYMLINK libspdk_ioat.so 00:03:06.717 SO libspdk_vfio_user.so.4.0 00:03:06.717 CC lib/util/xor.o 00:03:06.717 CC lib/util/zipf.o 00:03:06.717 SYMLINK libspdk_vfio_user.so 00:03:07.286 LIB libspdk_util.a 00:03:07.286 SO libspdk_util.so.8.0 00:03:07.286 LIB libspdk_trace_parser.a 00:03:07.286 SO libspdk_trace_parser.so.4.0 00:03:07.545 SYMLINK libspdk_util.so 00:03:07.545 SYMLINK libspdk_trace_parser.so 00:03:07.545 CC lib/env_dpdk/memory.o 00:03:07.545 CC lib/env_dpdk/env.o 00:03:07.545 CC lib/env_dpdk/init.o 00:03:07.545 CC lib/env_dpdk/threads.o 00:03:07.545 CC lib/env_dpdk/pci.o 00:03:07.545 CC lib/conf/conf.o 00:03:07.545 CC lib/idxd/idxd.o 00:03:07.545 CC lib/json/json_parse.o 00:03:07.545 CC lib/vmd/vmd.o 00:03:07.545 CC lib/rdma/common.o 00:03:07.803 CC lib/env_dpdk/pci_ioat.o 00:03:07.803 LIB libspdk_conf.a 00:03:07.803 CC lib/json/json_util.o 00:03:07.803 SO libspdk_conf.so.5.0 00:03:07.803 CC lib/idxd/idxd_user.o 00:03:07.803 SYMLINK libspdk_conf.so 00:03:07.803 CC lib/idxd/idxd_kernel.o 00:03:07.803 CC lib/rdma/rdma_verbs.o 00:03:08.062 CC lib/env_dpdk/pci_virtio.o 00:03:08.062 CC lib/vmd/led.o 00:03:08.062 CC lib/json/json_write.o 00:03:08.062 CC lib/env_dpdk/pci_vmd.o 00:03:08.062 CC lib/env_dpdk/pci_idxd.o 00:03:08.062 LIB libspdk_rdma.a 00:03:08.062 CC lib/env_dpdk/pci_event.o 00:03:08.062 CC lib/env_dpdk/sigbus_handler.o 00:03:08.062 SO libspdk_rdma.so.5.0 00:03:08.321 LIB libspdk_idxd.a 00:03:08.321 CC lib/env_dpdk/pci_dpdk.o 00:03:08.321 SYMLINK libspdk_rdma.so 00:03:08.321 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:08.321 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:08.321 SO libspdk_idxd.so.11.0 00:03:08.321 SYMLINK libspdk_idxd.so 00:03:08.321 LIB libspdk_vmd.a 00:03:08.321 LIB libspdk_json.a 00:03:08.321 SO libspdk_vmd.so.5.0 00:03:08.321 SO libspdk_json.so.5.1 00:03:08.578 SYMLINK libspdk_vmd.so 00:03:08.578 SYMLINK libspdk_json.so 00:03:08.835 CC lib/jsonrpc/jsonrpc_server.o 00:03:08.835 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:08.835 CC lib/jsonrpc/jsonrpc_client.o 00:03:08.835 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:09.094 LIB libspdk_jsonrpc.a 00:03:09.094 SO libspdk_jsonrpc.so.5.1 00:03:09.094 LIB libspdk_env_dpdk.a 00:03:09.094 SYMLINK libspdk_jsonrpc.so 00:03:09.352 SO libspdk_env_dpdk.so.13.0 00:03:09.352 SYMLINK 
libspdk_env_dpdk.so 00:03:09.610 CC lib/rpc/rpc.o 00:03:09.610 LIB libspdk_rpc.a 00:03:09.869 SO libspdk_rpc.so.5.0 00:03:09.869 SYMLINK libspdk_rpc.so 00:03:10.127 CC lib/sock/sock.o 00:03:10.127 CC lib/trace/trace.o 00:03:10.127 CC lib/notify/notify.o 00:03:10.127 CC lib/sock/sock_rpc.o 00:03:10.127 CC lib/trace/trace_rpc.o 00:03:10.127 CC lib/notify/notify_rpc.o 00:03:10.127 CC lib/trace/trace_flags.o 00:03:10.386 LIB libspdk_notify.a 00:03:10.386 SO libspdk_notify.so.5.0 00:03:10.386 LIB libspdk_trace.a 00:03:10.386 SYMLINK libspdk_notify.so 00:03:10.386 SO libspdk_trace.so.9.0 00:03:10.386 SYMLINK libspdk_trace.so 00:03:10.386 LIB libspdk_sock.a 00:03:10.645 SO libspdk_sock.so.8.0 00:03:10.645 SYMLINK libspdk_sock.so 00:03:10.645 CC lib/thread/iobuf.o 00:03:10.645 CC lib/thread/thread.o 00:03:10.904 CC lib/nvme/nvme_ctrlr.o 00:03:10.904 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:10.904 CC lib/nvme/nvme_fabric.o 00:03:10.904 CC lib/nvme/nvme_ns_cmd.o 00:03:10.904 CC lib/nvme/nvme_ns.o 00:03:10.904 CC lib/nvme/nvme_pcie_common.o 00:03:10.904 CC lib/nvme/nvme_pcie.o 00:03:10.904 CC lib/nvme/nvme_qpair.o 00:03:11.162 CC lib/nvme/nvme.o 00:03:11.734 CC lib/nvme/nvme_quirks.o 00:03:11.734 CC lib/nvme/nvme_transport.o 00:03:11.734 CC lib/nvme/nvme_discovery.o 00:03:11.734 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:11.734 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:11.734 CC lib/nvme/nvme_tcp.o 00:03:11.992 CC lib/nvme/nvme_opal.o 00:03:11.992 CC lib/nvme/nvme_io_msg.o 00:03:11.992 CC lib/nvme/nvme_poll_group.o 00:03:12.251 CC lib/nvme/nvme_zns.o 00:03:12.251 CC lib/nvme/nvme_cuse.o 00:03:12.251 CC lib/nvme/nvme_vfio_user.o 00:03:12.251 CC lib/nvme/nvme_rdma.o 00:03:12.251 LIB libspdk_thread.a 00:03:12.510 SO libspdk_thread.so.9.0 00:03:12.510 SYMLINK libspdk_thread.so 00:03:12.510 CC lib/accel/accel.o 00:03:12.510 CC lib/blob/blobstore.o 00:03:12.768 CC lib/init/json_config.o 00:03:12.768 CC lib/init/subsystem.o 00:03:12.768 CC lib/virtio/virtio.o 00:03:12.768 CC lib/virtio/virtio_vhost_user.o 00:03:12.768 CC lib/virtio/virtio_vfio_user.o 00:03:12.768 CC lib/init/subsystem_rpc.o 00:03:13.026 CC lib/init/rpc.o 00:03:13.026 CC lib/accel/accel_rpc.o 00:03:13.026 CC lib/accel/accel_sw.o 00:03:13.026 CC lib/virtio/virtio_pci.o 00:03:13.026 LIB libspdk_init.a 00:03:13.285 CC lib/blob/request.o 00:03:13.285 CC lib/blob/zeroes.o 00:03:13.285 SO libspdk_init.so.4.0 00:03:13.285 CC lib/blob/blob_bs_dev.o 00:03:13.285 SYMLINK libspdk_init.so 00:03:13.285 CC lib/event/app.o 00:03:13.285 CC lib/event/reactor.o 00:03:13.285 CC lib/event/app_rpc.o 00:03:13.285 CC lib/event/log_rpc.o 00:03:13.285 LIB libspdk_virtio.a 00:03:13.544 CC lib/event/scheduler_static.o 00:03:13.544 SO libspdk_virtio.so.6.0 00:03:13.544 SYMLINK libspdk_virtio.so 00:03:13.544 LIB libspdk_nvme.a 00:03:13.803 LIB libspdk_accel.a 00:03:13.803 SO libspdk_accel.so.14.0 00:03:13.803 LIB libspdk_event.a 00:03:13.803 SYMLINK libspdk_accel.so 00:03:13.803 SO libspdk_event.so.12.0 00:03:13.803 SO libspdk_nvme.so.12.0 00:03:14.062 SYMLINK libspdk_event.so 00:03:14.062 CC lib/bdev/bdev.o 00:03:14.062 CC lib/bdev/bdev_rpc.o 00:03:14.062 CC lib/bdev/bdev_zone.o 00:03:14.062 CC lib/bdev/part.o 00:03:14.062 CC lib/bdev/scsi_nvme.o 00:03:14.321 SYMLINK libspdk_nvme.so 00:03:15.698 LIB libspdk_blob.a 00:03:15.698 SO libspdk_blob.so.10.1 00:03:15.957 SYMLINK libspdk_blob.so 00:03:16.216 CC lib/blobfs/tree.o 00:03:16.216 CC lib/blobfs/blobfs.o 00:03:16.216 CC lib/lvol/lvol.o 00:03:16.785 LIB libspdk_bdev.a 00:03:17.045 SO libspdk_bdev.so.14.0 00:03:17.045 LIB 
libspdk_blobfs.a 00:03:17.045 SYMLINK libspdk_bdev.so 00:03:17.045 SO libspdk_blobfs.so.9.0 00:03:17.045 LIB libspdk_lvol.a 00:03:17.304 SO libspdk_lvol.so.9.1 00:03:17.304 SYMLINK libspdk_blobfs.so 00:03:17.304 CC lib/nvmf/ctrlr.o 00:03:17.304 CC lib/nvmf/ctrlr_discovery.o 00:03:17.304 CC lib/nvmf/subsystem.o 00:03:17.304 CC lib/nvmf/ctrlr_bdev.o 00:03:17.304 CC lib/nvmf/nvmf.o 00:03:17.304 CC lib/ftl/ftl_core.o 00:03:17.304 CC lib/nbd/nbd.o 00:03:17.304 CC lib/scsi/dev.o 00:03:17.304 CC lib/ublk/ublk.o 00:03:17.304 SYMLINK libspdk_lvol.so 00:03:17.304 CC lib/scsi/lun.o 00:03:17.564 CC lib/ftl/ftl_init.o 00:03:17.564 CC lib/scsi/port.o 00:03:17.564 CC lib/scsi/scsi.o 00:03:17.564 CC lib/nbd/nbd_rpc.o 00:03:17.564 CC lib/ftl/ftl_layout.o 00:03:17.823 CC lib/ftl/ftl_debug.o 00:03:17.823 CC lib/scsi/scsi_bdev.o 00:03:17.823 CC lib/scsi/scsi_pr.o 00:03:17.823 LIB libspdk_nbd.a 00:03:17.823 SO libspdk_nbd.so.6.0 00:03:17.823 CC lib/ublk/ublk_rpc.o 00:03:17.823 CC lib/nvmf/nvmf_rpc.o 00:03:18.083 SYMLINK libspdk_nbd.so 00:03:18.083 CC lib/nvmf/transport.o 00:03:18.083 CC lib/ftl/ftl_io.o 00:03:18.083 CC lib/ftl/ftl_sb.o 00:03:18.083 CC lib/scsi/scsi_rpc.o 00:03:18.083 LIB libspdk_ublk.a 00:03:18.083 SO libspdk_ublk.so.2.0 00:03:18.083 CC lib/ftl/ftl_l2p.o 00:03:18.083 SYMLINK libspdk_ublk.so 00:03:18.083 CC lib/nvmf/tcp.o 00:03:18.083 CC lib/nvmf/rdma.o 00:03:18.083 CC lib/scsi/task.o 00:03:18.342 CC lib/ftl/ftl_l2p_flat.o 00:03:18.342 CC lib/ftl/ftl_nv_cache.o 00:03:18.342 CC lib/ftl/ftl_band.o 00:03:18.342 CC lib/ftl/ftl_band_ops.o 00:03:18.342 LIB libspdk_scsi.a 00:03:18.342 CC lib/ftl/ftl_writer.o 00:03:18.601 SO libspdk_scsi.so.8.0 00:03:18.601 SYMLINK libspdk_scsi.so 00:03:18.601 CC lib/ftl/ftl_rq.o 00:03:18.601 CC lib/ftl/ftl_reloc.o 00:03:18.601 CC lib/ftl/ftl_l2p_cache.o 00:03:18.861 CC lib/ftl/ftl_p2l.o 00:03:18.861 CC lib/ftl/mngt/ftl_mngt.o 00:03:18.861 CC lib/iscsi/conn.o 00:03:18.861 CC lib/vhost/vhost.o 00:03:19.120 CC lib/iscsi/init_grp.o 00:03:19.120 CC lib/iscsi/iscsi.o 00:03:19.120 CC lib/iscsi/md5.o 00:03:19.120 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:19.120 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:19.120 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:19.120 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:19.120 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:19.379 CC lib/vhost/vhost_rpc.o 00:03:19.379 CC lib/vhost/vhost_scsi.o 00:03:19.379 CC lib/vhost/vhost_blk.o 00:03:19.379 CC lib/vhost/rte_vhost_user.o 00:03:19.379 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:19.379 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:19.639 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:19.639 CC lib/iscsi/param.o 00:03:19.639 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:19.898 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:19.898 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:19.898 CC lib/iscsi/portal_grp.o 00:03:19.898 CC lib/iscsi/tgt_node.o 00:03:19.898 CC lib/iscsi/iscsi_subsystem.o 00:03:20.157 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:20.157 CC lib/iscsi/iscsi_rpc.o 00:03:20.157 CC lib/iscsi/task.o 00:03:20.417 CC lib/ftl/utils/ftl_conf.o 00:03:20.417 CC lib/ftl/utils/ftl_md.o 00:03:20.417 CC lib/ftl/utils/ftl_mempool.o 00:03:20.417 CC lib/ftl/utils/ftl_bitmap.o 00:03:20.417 CC lib/ftl/utils/ftl_property.o 00:03:20.417 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:20.417 LIB libspdk_vhost.a 00:03:20.417 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:20.417 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:20.417 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:20.417 LIB libspdk_nvmf.a 00:03:20.417 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:20.417 SO 
libspdk_vhost.so.7.1 00:03:20.676 LIB libspdk_iscsi.a 00:03:20.676 SO libspdk_nvmf.so.17.0 00:03:20.676 SO libspdk_iscsi.so.7.0 00:03:20.676 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:20.676 SYMLINK libspdk_vhost.so 00:03:20.676 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:20.676 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:20.676 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:20.676 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:20.676 CC lib/ftl/base/ftl_base_dev.o 00:03:20.676 CC lib/ftl/base/ftl_base_bdev.o 00:03:20.676 CC lib/ftl/ftl_trace.o 00:03:20.936 SYMLINK libspdk_iscsi.so 00:03:20.936 SYMLINK libspdk_nvmf.so 00:03:20.936 LIB libspdk_ftl.a 00:03:21.195 SO libspdk_ftl.so.8.0 00:03:21.763 SYMLINK libspdk_ftl.so 00:03:22.022 CC module/env_dpdk/env_dpdk_rpc.o 00:03:22.022 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:22.022 CC module/sock/posix/posix.o 00:03:22.022 CC module/scheduler/gscheduler/gscheduler.o 00:03:22.022 CC module/accel/error/accel_error.o 00:03:22.022 CC module/blob/bdev/blob_bdev.o 00:03:22.022 CC module/accel/dsa/accel_dsa.o 00:03:22.022 CC module/accel/ioat/accel_ioat.o 00:03:22.022 CC module/accel/iaa/accel_iaa.o 00:03:22.022 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:22.022 LIB libspdk_env_dpdk_rpc.a 00:03:22.022 SO libspdk_env_dpdk_rpc.so.5.0 00:03:22.022 LIB libspdk_scheduler_dpdk_governor.a 00:03:22.022 LIB libspdk_scheduler_gscheduler.a 00:03:22.022 SO libspdk_scheduler_gscheduler.so.3.0 00:03:22.022 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:22.022 SYMLINK libspdk_env_dpdk_rpc.so 00:03:22.022 CC module/accel/ioat/accel_ioat_rpc.o 00:03:22.280 CC module/accel/error/accel_error_rpc.o 00:03:22.280 CC module/accel/iaa/accel_iaa_rpc.o 00:03:22.280 LIB libspdk_scheduler_dynamic.a 00:03:22.280 CC module/accel/dsa/accel_dsa_rpc.o 00:03:22.280 SYMLINK libspdk_scheduler_gscheduler.so 00:03:22.280 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:22.280 SO libspdk_scheduler_dynamic.so.3.0 00:03:22.280 LIB libspdk_blob_bdev.a 00:03:22.280 SYMLINK libspdk_scheduler_dynamic.so 00:03:22.280 LIB libspdk_accel_ioat.a 00:03:22.280 SO libspdk_blob_bdev.so.10.1 00:03:22.280 LIB libspdk_accel_iaa.a 00:03:22.280 LIB libspdk_accel_error.a 00:03:22.280 SO libspdk_accel_ioat.so.5.0 00:03:22.280 SO libspdk_accel_iaa.so.2.0 00:03:22.280 LIB libspdk_accel_dsa.a 00:03:22.280 SO libspdk_accel_error.so.1.0 00:03:22.280 SYMLINK libspdk_blob_bdev.so 00:03:22.280 SO libspdk_accel_dsa.so.4.0 00:03:22.280 SYMLINK libspdk_accel_error.so 00:03:22.280 SYMLINK libspdk_accel_iaa.so 00:03:22.280 SYMLINK libspdk_accel_ioat.so 00:03:22.538 SYMLINK libspdk_accel_dsa.so 00:03:22.538 CC module/bdev/delay/vbdev_delay.o 00:03:22.538 CC module/bdev/error/vbdev_error.o 00:03:22.538 CC module/blobfs/bdev/blobfs_bdev.o 00:03:22.538 CC module/bdev/null/bdev_null.o 00:03:22.538 CC module/bdev/nvme/bdev_nvme.o 00:03:22.538 CC module/bdev/lvol/vbdev_lvol.o 00:03:22.538 CC module/bdev/malloc/bdev_malloc.o 00:03:22.538 CC module/bdev/gpt/gpt.o 00:03:22.538 CC module/bdev/passthru/vbdev_passthru.o 00:03:22.796 LIB libspdk_sock_posix.a 00:03:22.796 SO libspdk_sock_posix.so.5.0 00:03:22.796 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:22.796 CC module/bdev/gpt/vbdev_gpt.o 00:03:22.796 SYMLINK libspdk_sock_posix.so 00:03:22.796 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:22.796 CC module/bdev/error/vbdev_error_rpc.o 00:03:22.796 CC module/bdev/null/bdev_null_rpc.o 00:03:23.054 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:23.054 LIB libspdk_blobfs_bdev.a 00:03:23.054 CC module/bdev/delay/vbdev_delay_rpc.o 
00:03:23.054 SO libspdk_blobfs_bdev.so.5.0 00:03:23.054 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:23.054 LIB libspdk_bdev_error.a 00:03:23.054 LIB libspdk_bdev_malloc.a 00:03:23.054 LIB libspdk_bdev_null.a 00:03:23.054 SO libspdk_bdev_error.so.5.0 00:03:23.054 SYMLINK libspdk_blobfs_bdev.so 00:03:23.054 SO libspdk_bdev_malloc.so.5.0 00:03:23.054 SO libspdk_bdev_null.so.5.0 00:03:23.054 LIB libspdk_bdev_passthru.a 00:03:23.054 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:23.054 CC module/bdev/nvme/nvme_rpc.o 00:03:23.054 LIB libspdk_bdev_gpt.a 00:03:23.054 SYMLINK libspdk_bdev_error.so 00:03:23.054 CC module/bdev/nvme/bdev_mdns_client.o 00:03:23.054 SYMLINK libspdk_bdev_malloc.so 00:03:23.054 SO libspdk_bdev_passthru.so.5.0 00:03:23.054 SO libspdk_bdev_gpt.so.5.0 00:03:23.054 LIB libspdk_bdev_delay.a 00:03:23.054 SYMLINK libspdk_bdev_null.so 00:03:23.054 SO libspdk_bdev_delay.so.5.0 00:03:23.313 SYMLINK libspdk_bdev_passthru.so 00:03:23.313 SYMLINK libspdk_bdev_gpt.so 00:03:23.313 CC module/bdev/raid/bdev_raid.o 00:03:23.313 SYMLINK libspdk_bdev_delay.so 00:03:23.313 CC module/bdev/raid/bdev_raid_rpc.o 00:03:23.313 CC module/bdev/raid/bdev_raid_sb.o 00:03:23.313 CC module/bdev/split/vbdev_split.o 00:03:23.313 CC module/bdev/raid/raid0.o 00:03:23.313 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:23.313 CC module/bdev/xnvme/bdev_xnvme.o 00:03:23.313 LIB libspdk_bdev_lvol.a 00:03:23.313 SO libspdk_bdev_lvol.so.5.0 00:03:23.572 SYMLINK libspdk_bdev_lvol.so 00:03:23.572 CC module/bdev/nvme/vbdev_opal.o 00:03:23.572 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:23.572 CC module/bdev/split/vbdev_split_rpc.o 00:03:23.572 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:23.572 CC module/bdev/aio/bdev_aio.o 00:03:23.572 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:23.572 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:23.572 LIB libspdk_bdev_split.a 00:03:23.572 CC module/bdev/aio/bdev_aio_rpc.o 00:03:23.572 SO libspdk_bdev_split.so.5.0 00:03:23.831 CC module/bdev/raid/raid1.o 00:03:23.831 CC module/bdev/raid/concat.o 00:03:23.831 LIB libspdk_bdev_xnvme.a 00:03:23.831 SYMLINK libspdk_bdev_split.so 00:03:23.831 SO libspdk_bdev_xnvme.so.2.0 00:03:23.831 LIB libspdk_bdev_zone_block.a 00:03:23.831 CC module/bdev/ftl/bdev_ftl.o 00:03:23.831 SYMLINK libspdk_bdev_xnvme.so 00:03:23.831 SO libspdk_bdev_zone_block.so.5.0 00:03:23.831 LIB libspdk_bdev_aio.a 00:03:23.831 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:23.831 CC module/bdev/iscsi/bdev_iscsi.o 00:03:23.831 SO libspdk_bdev_aio.so.5.0 00:03:23.831 SYMLINK libspdk_bdev_zone_block.so 00:03:23.831 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:24.090 SYMLINK libspdk_bdev_aio.so 00:03:24.090 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:24.090 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:24.090 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:24.090 LIB libspdk_bdev_ftl.a 00:03:24.090 LIB libspdk_bdev_raid.a 00:03:24.090 SO libspdk_bdev_ftl.so.5.0 00:03:24.090 SO libspdk_bdev_raid.so.5.0 00:03:24.349 SYMLINK libspdk_bdev_ftl.so 00:03:24.349 LIB libspdk_bdev_iscsi.a 00:03:24.349 SYMLINK libspdk_bdev_raid.so 00:03:24.349 SO libspdk_bdev_iscsi.so.5.0 00:03:24.349 SYMLINK libspdk_bdev_iscsi.so 00:03:24.608 LIB libspdk_bdev_virtio.a 00:03:24.608 SO libspdk_bdev_virtio.so.5.0 00:03:24.608 SYMLINK libspdk_bdev_virtio.so 00:03:24.867 LIB libspdk_bdev_nvme.a 00:03:24.867 SO libspdk_bdev_nvme.so.6.0 00:03:25.127 SYMLINK libspdk_bdev_nvme.so 00:03:25.695 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:25.695 CC module/event/subsystems/iobuf/iobuf.o 00:03:25.695 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:03:25.695 CC module/event/subsystems/vmd/vmd.o 00:03:25.695 CC module/event/subsystems/sock/sock.o 00:03:25.695 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:25.695 CC module/event/subsystems/scheduler/scheduler.o 00:03:25.695 LIB libspdk_event_vhost_blk.a 00:03:25.695 LIB libspdk_event_sock.a 00:03:25.695 LIB libspdk_event_vmd.a 00:03:25.695 LIB libspdk_event_scheduler.a 00:03:25.695 LIB libspdk_event_iobuf.a 00:03:25.695 SO libspdk_event_vhost_blk.so.2.0 00:03:25.695 SO libspdk_event_vmd.so.5.0 00:03:25.695 SO libspdk_event_sock.so.4.0 00:03:25.955 SO libspdk_event_scheduler.so.3.0 00:03:25.955 SO libspdk_event_iobuf.so.2.0 00:03:25.955 SYMLINK libspdk_event_sock.so 00:03:25.955 SYMLINK libspdk_event_vhost_blk.so 00:03:25.955 SYMLINK libspdk_event_vmd.so 00:03:25.955 SYMLINK libspdk_event_scheduler.so 00:03:25.955 SYMLINK libspdk_event_iobuf.so 00:03:26.214 CC module/event/subsystems/accel/accel.o 00:03:26.214 LIB libspdk_event_accel.a 00:03:26.474 SO libspdk_event_accel.so.5.0 00:03:26.474 SYMLINK libspdk_event_accel.so 00:03:26.733 CC module/event/subsystems/bdev/bdev.o 00:03:26.993 LIB libspdk_event_bdev.a 00:03:26.993 SO libspdk_event_bdev.so.5.0 00:03:26.993 SYMLINK libspdk_event_bdev.so 00:03:27.252 CC module/event/subsystems/scsi/scsi.o 00:03:27.252 CC module/event/subsystems/ublk/ublk.o 00:03:27.252 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:27.252 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:27.252 CC module/event/subsystems/nbd/nbd.o 00:03:27.512 LIB libspdk_event_ublk.a 00:03:27.512 LIB libspdk_event_nbd.a 00:03:27.512 LIB libspdk_event_scsi.a 00:03:27.512 SO libspdk_event_scsi.so.5.0 00:03:27.512 SO libspdk_event_ublk.so.2.0 00:03:27.512 SO libspdk_event_nbd.so.5.0 00:03:27.512 SYMLINK libspdk_event_ublk.so 00:03:27.512 SYMLINK libspdk_event_scsi.so 00:03:27.512 LIB libspdk_event_nvmf.a 00:03:27.512 SYMLINK libspdk_event_nbd.so 00:03:27.512 SO libspdk_event_nvmf.so.5.0 00:03:27.770 SYMLINK libspdk_event_nvmf.so 00:03:27.770 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:27.770 CC module/event/subsystems/iscsi/iscsi.o 00:03:28.029 LIB libspdk_event_vhost_scsi.a 00:03:28.029 LIB libspdk_event_iscsi.a 00:03:28.029 SO libspdk_event_vhost_scsi.so.2.0 00:03:28.029 SO libspdk_event_iscsi.so.5.0 00:03:28.029 SYMLINK libspdk_event_vhost_scsi.so 00:03:28.029 SYMLINK libspdk_event_iscsi.so 00:03:28.288 SO libspdk.so.5.0 00:03:28.288 SYMLINK libspdk.so 00:03:28.547 CXX app/trace/trace.o 00:03:28.547 CC examples/vmd/lsvmd/lsvmd.o 00:03:28.547 CC examples/accel/perf/accel_perf.o 00:03:28.547 CC examples/sock/hello_world/hello_sock.o 00:03:28.547 CC examples/nvme/hello_world/hello_world.o 00:03:28.547 CC examples/ioat/perf/perf.o 00:03:28.547 CC examples/bdev/hello_world/hello_bdev.o 00:03:28.547 CC examples/nvmf/nvmf/nvmf.o 00:03:28.547 CC examples/blob/hello_world/hello_blob.o 00:03:28.547 CC test/accel/dif/dif.o 00:03:28.547 LINK lsvmd 00:03:28.806 LINK hello_world 00:03:28.806 LINK hello_bdev 00:03:28.806 LINK ioat_perf 00:03:28.806 LINK hello_sock 00:03:28.806 LINK hello_blob 00:03:28.806 LINK spdk_trace 00:03:28.806 LINK nvmf 00:03:28.806 CC examples/vmd/led/led.o 00:03:28.806 LINK dif 00:03:28.806 CC examples/ioat/verify/verify.o 00:03:29.065 CC examples/nvme/reconnect/reconnect.o 00:03:29.065 LINK accel_perf 00:03:29.065 CC examples/bdev/bdevperf/bdevperf.o 00:03:29.065 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:29.065 LINK led 00:03:29.065 CC examples/blob/cli/blobcli.o 00:03:29.065 CC 
app/trace_record/trace_record.o 00:03:29.065 CC examples/util/zipf/zipf.o 00:03:29.065 LINK verify 00:03:29.324 LINK zipf 00:03:29.324 CC test/app/bdev_svc/bdev_svc.o 00:03:29.324 CC test/bdev/bdevio/bdevio.o 00:03:29.324 LINK reconnect 00:03:29.324 LINK spdk_trace_record 00:03:29.324 CC test/blobfs/mkfs/mkfs.o 00:03:29.324 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:29.324 LINK bdev_svc 00:03:29.583 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:29.583 LINK mkfs 00:03:29.583 LINK nvme_manage 00:03:29.583 CC app/nvmf_tgt/nvmf_main.o 00:03:29.583 CC app/iscsi_tgt/iscsi_tgt.o 00:03:29.583 LINK blobcli 00:03:29.583 LINK bdevio 00:03:29.583 LINK nvmf_tgt 00:03:29.583 TEST_HEADER include/spdk/accel.h 00:03:29.583 TEST_HEADER include/spdk/accel_module.h 00:03:29.583 CC examples/nvme/arbitration/arbitration.o 00:03:29.583 TEST_HEADER include/spdk/assert.h 00:03:29.583 TEST_HEADER include/spdk/barrier.h 00:03:29.583 TEST_HEADER include/spdk/base64.h 00:03:29.842 CC examples/nvme/hotplug/hotplug.o 00:03:29.842 TEST_HEADER include/spdk/bdev.h 00:03:29.842 TEST_HEADER include/spdk/bdev_module.h 00:03:29.842 TEST_HEADER include/spdk/bdev_zone.h 00:03:29.842 TEST_HEADER include/spdk/bit_array.h 00:03:29.842 TEST_HEADER include/spdk/bit_pool.h 00:03:29.842 TEST_HEADER include/spdk/blob_bdev.h 00:03:29.842 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:29.842 TEST_HEADER include/spdk/blobfs.h 00:03:29.842 TEST_HEADER include/spdk/blob.h 00:03:29.842 TEST_HEADER include/spdk/conf.h 00:03:29.842 TEST_HEADER include/spdk/config.h 00:03:29.842 TEST_HEADER include/spdk/cpuset.h 00:03:29.842 TEST_HEADER include/spdk/crc16.h 00:03:29.842 TEST_HEADER include/spdk/crc32.h 00:03:29.842 TEST_HEADER include/spdk/crc64.h 00:03:29.842 TEST_HEADER include/spdk/dif.h 00:03:29.842 LINK iscsi_tgt 00:03:29.842 TEST_HEADER include/spdk/dma.h 00:03:29.842 TEST_HEADER include/spdk/endian.h 00:03:29.842 TEST_HEADER include/spdk/env_dpdk.h 00:03:29.842 TEST_HEADER include/spdk/env.h 00:03:29.842 LINK bdevperf 00:03:29.842 TEST_HEADER include/spdk/event.h 00:03:29.842 TEST_HEADER include/spdk/fd_group.h 00:03:29.842 TEST_HEADER include/spdk/fd.h 00:03:29.843 TEST_HEADER include/spdk/file.h 00:03:29.843 TEST_HEADER include/spdk/ftl.h 00:03:29.843 TEST_HEADER include/spdk/gpt_spec.h 00:03:29.843 TEST_HEADER include/spdk/hexlify.h 00:03:29.843 TEST_HEADER include/spdk/histogram_data.h 00:03:29.843 TEST_HEADER include/spdk/idxd.h 00:03:29.843 TEST_HEADER include/spdk/idxd_spec.h 00:03:29.843 TEST_HEADER include/spdk/init.h 00:03:29.843 TEST_HEADER include/spdk/ioat.h 00:03:29.843 TEST_HEADER include/spdk/ioat_spec.h 00:03:29.843 TEST_HEADER include/spdk/iscsi_spec.h 00:03:29.843 TEST_HEADER include/spdk/json.h 00:03:29.843 TEST_HEADER include/spdk/jsonrpc.h 00:03:29.843 TEST_HEADER include/spdk/likely.h 00:03:29.843 TEST_HEADER include/spdk/log.h 00:03:29.843 TEST_HEADER include/spdk/lvol.h 00:03:29.843 TEST_HEADER include/spdk/memory.h 00:03:29.843 TEST_HEADER include/spdk/mmio.h 00:03:29.843 TEST_HEADER include/spdk/nbd.h 00:03:29.843 TEST_HEADER include/spdk/notify.h 00:03:29.843 TEST_HEADER include/spdk/nvme.h 00:03:29.843 TEST_HEADER include/spdk/nvme_intel.h 00:03:29.843 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:29.843 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:29.843 TEST_HEADER include/spdk/nvme_spec.h 00:03:29.843 TEST_HEADER include/spdk/nvme_zns.h 00:03:29.843 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:29.843 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:29.843 TEST_HEADER include/spdk/nvmf.h 00:03:29.843 
TEST_HEADER include/spdk/nvmf_spec.h 00:03:29.843 TEST_HEADER include/spdk/nvmf_transport.h 00:03:29.843 TEST_HEADER include/spdk/opal.h 00:03:29.843 TEST_HEADER include/spdk/opal_spec.h 00:03:29.843 TEST_HEADER include/spdk/pci_ids.h 00:03:29.843 TEST_HEADER include/spdk/pipe.h 00:03:29.843 TEST_HEADER include/spdk/queue.h 00:03:29.843 TEST_HEADER include/spdk/reduce.h 00:03:29.843 TEST_HEADER include/spdk/rpc.h 00:03:29.843 TEST_HEADER include/spdk/scheduler.h 00:03:29.843 LINK nvme_fuzz 00:03:29.843 TEST_HEADER include/spdk/scsi.h 00:03:29.843 TEST_HEADER include/spdk/scsi_spec.h 00:03:29.843 TEST_HEADER include/spdk/sock.h 00:03:29.843 TEST_HEADER include/spdk/stdinc.h 00:03:29.843 TEST_HEADER include/spdk/string.h 00:03:29.843 TEST_HEADER include/spdk/thread.h 00:03:29.843 TEST_HEADER include/spdk/trace.h 00:03:29.843 TEST_HEADER include/spdk/trace_parser.h 00:03:29.843 TEST_HEADER include/spdk/tree.h 00:03:29.843 CC app/spdk_tgt/spdk_tgt.o 00:03:29.843 TEST_HEADER include/spdk/ublk.h 00:03:29.843 TEST_HEADER include/spdk/util.h 00:03:29.843 TEST_HEADER include/spdk/uuid.h 00:03:29.843 TEST_HEADER include/spdk/version.h 00:03:29.843 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:29.843 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:29.843 TEST_HEADER include/spdk/vhost.h 00:03:29.843 TEST_HEADER include/spdk/vmd.h 00:03:29.843 TEST_HEADER include/spdk/xor.h 00:03:29.843 TEST_HEADER include/spdk/zipf.h 00:03:29.843 CXX test/cpp_headers/accel.o 00:03:29.843 CC app/spdk_lspci/spdk_lspci.o 00:03:29.843 CXX test/cpp_headers/accel_module.o 00:03:29.843 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:29.843 LINK hotplug 00:03:30.102 CXX test/cpp_headers/assert.o 00:03:30.102 CC test/app/histogram_perf/histogram_perf.o 00:03:30.102 LINK spdk_lspci 00:03:30.102 LINK arbitration 00:03:30.102 LINK spdk_tgt 00:03:30.102 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:30.102 CC test/app/jsoncat/jsoncat.o 00:03:30.102 CC test/app/stub/stub.o 00:03:30.102 CXX test/cpp_headers/barrier.o 00:03:30.102 CXX test/cpp_headers/base64.o 00:03:30.102 LINK histogram_perf 00:03:30.102 CXX test/cpp_headers/bdev.o 00:03:30.361 LINK jsoncat 00:03:30.361 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:30.361 CC app/spdk_nvme_perf/perf.o 00:03:30.361 LINK stub 00:03:30.361 CC app/spdk_nvme_identify/identify.o 00:03:30.361 CC app/spdk_nvme_discover/discovery_aer.o 00:03:30.361 CXX test/cpp_headers/bdev_module.o 00:03:30.361 CC app/spdk_top/spdk_top.o 00:03:30.361 LINK cmb_copy 00:03:30.361 CC test/dma/test_dma/test_dma.o 00:03:30.361 CXX test/cpp_headers/bdev_zone.o 00:03:30.361 LINK vhost_fuzz 00:03:30.629 LINK spdk_nvme_discover 00:03:30.629 CC examples/nvme/abort/abort.o 00:03:30.629 CXX test/cpp_headers/bit_array.o 00:03:30.629 CC test/env/vtophys/vtophys.o 00:03:30.629 CC test/env/mem_callbacks/mem_callbacks.o 00:03:30.629 CXX test/cpp_headers/bit_pool.o 00:03:30.906 LINK test_dma 00:03:30.906 LINK vtophys 00:03:30.906 CXX test/cpp_headers/blob_bdev.o 00:03:30.906 CC test/event/event_perf/event_perf.o 00:03:30.906 CXX test/cpp_headers/blobfs_bdev.o 00:03:30.906 CXX test/cpp_headers/blobfs.o 00:03:30.906 LINK abort 00:03:31.164 LINK event_perf 00:03:31.164 CXX test/cpp_headers/blob.o 00:03:31.164 LINK spdk_nvme_perf 00:03:31.164 LINK spdk_nvme_identify 00:03:31.164 LINK mem_callbacks 00:03:31.164 CC test/lvol/esnap/esnap.o 00:03:31.164 CC test/event/reactor/reactor.o 00:03:31.164 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:31.164 LINK iscsi_fuzz 00:03:31.164 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:31.164 LINK spdk_top 00:03:31.423 CXX test/cpp_headers/conf.o 00:03:31.423 LINK reactor 00:03:31.423 CC test/rpc_client/rpc_client_test.o 00:03:31.423 LINK env_dpdk_post_init 00:03:31.423 CC test/nvme/aer/aer.o 00:03:31.423 LINK pmr_persistence 00:03:31.423 CC test/thread/poller_perf/poller_perf.o 00:03:31.423 CXX test/cpp_headers/config.o 00:03:31.423 CXX test/cpp_headers/cpuset.o 00:03:31.423 CC test/event/reactor_perf/reactor_perf.o 00:03:31.680 CC app/vhost/vhost.o 00:03:31.680 LINK rpc_client_test 00:03:31.680 LINK poller_perf 00:03:31.680 CC test/event/app_repeat/app_repeat.o 00:03:31.680 CC test/env/memory/memory_ut.o 00:03:31.680 CXX test/cpp_headers/crc16.o 00:03:31.680 LINK reactor_perf 00:03:31.680 LINK aer 00:03:31.680 CC examples/thread/thread/thread_ex.o 00:03:31.680 LINK vhost 00:03:31.680 LINK app_repeat 00:03:31.680 CC test/nvme/reset/reset.o 00:03:31.680 CC test/nvme/sgl/sgl.o 00:03:31.680 CXX test/cpp_headers/crc32.o 00:03:31.938 CC test/nvme/e2edp/nvme_dp.o 00:03:31.938 CXX test/cpp_headers/crc64.o 00:03:31.938 CXX test/cpp_headers/dif.o 00:03:31.938 LINK thread 00:03:31.938 CC app/spdk_dd/spdk_dd.o 00:03:31.938 CC test/event/scheduler/scheduler.o 00:03:31.938 LINK reset 00:03:31.938 LINK sgl 00:03:32.196 CXX test/cpp_headers/dma.o 00:03:32.196 LINK nvme_dp 00:03:32.196 LINK scheduler 00:03:32.196 CC app/fio/nvme/fio_plugin.o 00:03:32.196 CXX test/cpp_headers/endian.o 00:03:32.196 CC test/nvme/overhead/overhead.o 00:03:32.196 CC examples/idxd/perf/perf.o 00:03:32.196 CC test/nvme/err_injection/err_injection.o 00:03:32.196 LINK spdk_dd 00:03:32.196 CXX test/cpp_headers/env_dpdk.o 00:03:32.454 LINK err_injection 00:03:32.454 LINK memory_ut 00:03:32.454 CC app/fio/bdev/fio_plugin.o 00:03:32.454 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:32.454 CXX test/cpp_headers/env.o 00:03:32.454 LINK overhead 00:03:32.454 CC test/nvme/startup/startup.o 00:03:32.713 LINK idxd_perf 00:03:32.713 CXX test/cpp_headers/event.o 00:03:32.713 LINK interrupt_tgt 00:03:32.713 CXX test/cpp_headers/fd_group.o 00:03:32.713 CC test/nvme/reserve/reserve.o 00:03:32.713 CXX test/cpp_headers/fd.o 00:03:32.713 LINK startup 00:03:32.713 CC test/env/pci/pci_ut.o 00:03:32.713 LINK spdk_nvme 00:03:32.713 CXX test/cpp_headers/file.o 00:03:32.972 CXX test/cpp_headers/ftl.o 00:03:32.972 CXX test/cpp_headers/gpt_spec.o 00:03:32.972 CC test/nvme/simple_copy/simple_copy.o 00:03:32.972 LINK reserve 00:03:32.972 CC test/nvme/connect_stress/connect_stress.o 00:03:32.972 CC test/nvme/boot_partition/boot_partition.o 00:03:32.972 LINK spdk_bdev 00:03:32.972 CC test/nvme/compliance/nvme_compliance.o 00:03:32.972 CXX test/cpp_headers/hexlify.o 00:03:32.972 CXX test/cpp_headers/histogram_data.o 00:03:32.972 CC test/nvme/fused_ordering/fused_ordering.o 00:03:33.231 LINK boot_partition 00:03:33.231 LINK simple_copy 00:03:33.231 LINK pci_ut 00:03:33.231 LINK connect_stress 00:03:33.231 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:33.231 CXX test/cpp_headers/idxd.o 00:03:33.231 CC test/nvme/fdp/fdp.o 00:03:33.231 LINK fused_ordering 00:03:33.231 CXX test/cpp_headers/idxd_spec.o 00:03:33.231 CXX test/cpp_headers/init.o 00:03:33.231 CC test/nvme/cuse/cuse.o 00:03:33.231 LINK doorbell_aers 00:03:33.489 CXX test/cpp_headers/ioat.o 00:03:33.489 CXX test/cpp_headers/ioat_spec.o 00:03:33.489 LINK nvme_compliance 00:03:33.489 CXX test/cpp_headers/iscsi_spec.o 00:03:33.489 CXX test/cpp_headers/json.o 00:03:33.489 CXX test/cpp_headers/jsonrpc.o 00:03:33.489 CXX 
test/cpp_headers/likely.o 00:03:33.489 CXX test/cpp_headers/log.o 00:03:33.489 CXX test/cpp_headers/lvol.o 00:03:33.489 CXX test/cpp_headers/memory.o 00:03:33.489 LINK fdp 00:03:33.489 CXX test/cpp_headers/mmio.o 00:03:33.489 CXX test/cpp_headers/nbd.o 00:03:33.489 CXX test/cpp_headers/notify.o 00:03:33.489 CXX test/cpp_headers/nvme.o 00:03:33.748 CXX test/cpp_headers/nvme_intel.o 00:03:33.748 CXX test/cpp_headers/nvme_ocssd.o 00:03:33.748 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:33.748 CXX test/cpp_headers/nvme_spec.o 00:03:33.748 CXX test/cpp_headers/nvme_zns.o 00:03:33.748 CXX test/cpp_headers/nvmf_cmd.o 00:03:33.748 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:33.748 CXX test/cpp_headers/nvmf.o 00:03:33.748 CXX test/cpp_headers/nvmf_spec.o 00:03:33.748 CXX test/cpp_headers/nvmf_transport.o 00:03:33.748 CXX test/cpp_headers/opal.o 00:03:33.748 CXX test/cpp_headers/opal_spec.o 00:03:33.748 CXX test/cpp_headers/pci_ids.o 00:03:34.006 CXX test/cpp_headers/pipe.o 00:03:34.006 CXX test/cpp_headers/queue.o 00:03:34.006 CXX test/cpp_headers/reduce.o 00:03:34.006 CXX test/cpp_headers/rpc.o 00:03:34.006 CXX test/cpp_headers/scheduler.o 00:03:34.006 CXX test/cpp_headers/scsi.o 00:03:34.006 CXX test/cpp_headers/scsi_spec.o 00:03:34.006 CXX test/cpp_headers/sock.o 00:03:34.006 CXX test/cpp_headers/stdinc.o 00:03:34.006 CXX test/cpp_headers/string.o 00:03:34.006 CXX test/cpp_headers/thread.o 00:03:34.006 CXX test/cpp_headers/trace.o 00:03:34.006 CXX test/cpp_headers/trace_parser.o 00:03:34.006 CXX test/cpp_headers/tree.o 00:03:34.006 CXX test/cpp_headers/ublk.o 00:03:34.265 CXX test/cpp_headers/util.o 00:03:34.265 CXX test/cpp_headers/uuid.o 00:03:34.265 CXX test/cpp_headers/version.o 00:03:34.265 CXX test/cpp_headers/vfio_user_pci.o 00:03:34.265 CXX test/cpp_headers/vfio_user_spec.o 00:03:34.265 CXX test/cpp_headers/vhost.o 00:03:34.265 CXX test/cpp_headers/vmd.o 00:03:34.265 LINK cuse 00:03:34.265 CXX test/cpp_headers/xor.o 00:03:34.265 CXX test/cpp_headers/zipf.o 00:03:36.167 LINK esnap 00:03:36.426 00:03:36.426 real 1m1.327s 00:03:36.426 user 5m30.549s 00:03:36.426 sys 1m44.030s 00:03:36.685 23:11:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:36.685 23:11:28 -- common/autotest_common.sh@10 -- $ set +x 00:03:36.685 ************************************ 00:03:36.685 END TEST make 00:03:36.685 ************************************ 00:03:36.685 23:11:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:36.685 23:11:28 -- nvmf/common.sh@7 -- # uname -s 00:03:36.685 23:11:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:36.685 23:11:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:36.685 23:11:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:36.685 23:11:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:36.685 23:11:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:36.685 23:11:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:36.685 23:11:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:36.685 23:11:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:36.685 23:11:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:36.685 23:11:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:36.685 23:11:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8183e38f-c15b-4fba-ab96-70becb9a62cd 00:03:36.685 23:11:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=8183e38f-c15b-4fba-ab96-70becb9a62cd 00:03:36.685 23:11:28 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:36.685 23:11:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:36.685 23:11:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:36.685 23:11:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:36.685 23:11:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:36.685 23:11:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:36.685 23:11:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:36.685 23:11:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:36.685 23:11:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:36.685 23:11:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:36.685 23:11:28 -- paths/export.sh@5 -- # export PATH 00:03:36.685 23:11:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:36.685 23:11:28 -- nvmf/common.sh@46 -- # : 0 00:03:36.685 23:11:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:36.685 23:11:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:36.685 23:11:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:36.685 23:11:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:36.685 23:11:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:36.685 23:11:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:36.685 23:11:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:36.685 23:11:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:36.685 23:11:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:36.685 23:11:28 -- spdk/autotest.sh@32 -- # uname -s 00:03:36.685 23:11:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:36.685 23:11:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:36.685 23:11:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:36.685 23:11:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:36.685 23:11:28 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:36.685 23:11:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:36.945 23:11:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:36.945 23:11:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:36.945 23:11:28 -- spdk/autotest.sh@48 -- # udevadm_pid=48265 00:03:36.945 23:11:28 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 
00:03:36.945 23:11:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:36.945 23:11:28 -- spdk/autotest.sh@54 -- # echo 48290 00:03:36.945 23:11:28 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:36.945 23:11:28 -- spdk/autotest.sh@56 -- # echo 48295 00:03:36.945 23:11:28 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:36.945 23:11:28 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:36.945 23:11:28 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:36.945 23:11:28 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:36.945 23:11:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:36.945 23:11:28 -- common/autotest_common.sh@10 -- # set +x 00:03:36.945 23:11:28 -- spdk/autotest.sh@70 -- # create_test_list 00:03:36.945 23:11:28 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:36.945 23:11:28 -- common/autotest_common.sh@10 -- # set +x 00:03:36.945 23:11:28 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:36.945 23:11:28 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:36.945 23:11:28 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:36.945 23:11:28 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:36.945 23:11:28 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:36.945 23:11:28 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:36.945 23:11:28 -- common/autotest_common.sh@1440 -- # uname 00:03:36.945 23:11:28 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:36.945 23:11:28 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:36.945 23:11:28 -- common/autotest_common.sh@1460 -- # uname 00:03:36.945 23:11:28 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:36.945 23:11:28 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:36.945 23:11:28 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:36.945 23:11:28 -- spdk/autotest.sh@83 -- # hash lcov 00:03:36.945 23:11:28 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:36.945 23:11:28 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:36.945 --rc lcov_branch_coverage=1 00:03:36.945 --rc lcov_function_coverage=1 00:03:36.945 --rc genhtml_branch_coverage=1 00:03:36.945 --rc genhtml_function_coverage=1 00:03:36.945 --rc genhtml_legend=1 00:03:36.945 --rc geninfo_all_blocks=1 00:03:36.945 ' 00:03:36.945 23:11:28 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:36.945 --rc lcov_branch_coverage=1 00:03:36.945 --rc lcov_function_coverage=1 00:03:36.945 --rc genhtml_branch_coverage=1 00:03:36.945 --rc genhtml_function_coverage=1 00:03:36.945 --rc genhtml_legend=1 00:03:36.945 --rc geninfo_all_blocks=1 00:03:36.945 ' 00:03:36.945 23:11:28 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:36.945 --rc lcov_branch_coverage=1 00:03:36.945 --rc lcov_function_coverage=1 00:03:36.945 --rc genhtml_branch_coverage=1 00:03:36.945 --rc genhtml_function_coverage=1 00:03:36.945 --rc genhtml_legend=1 00:03:36.945 --rc geninfo_all_blocks=1 00:03:36.945 --no-external' 00:03:36.945 23:11:28 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:36.945 --rc lcov_branch_coverage=1 00:03:36.945 --rc lcov_function_coverage=1 00:03:36.945 --rc genhtml_branch_coverage=1 00:03:36.945 --rc genhtml_function_coverage=1 
00:03:36.945 --rc genhtml_legend=1 00:03:36.945 --rc geninfo_all_blocks=1 00:03:36.945 --no-external' 00:03:36.945 23:11:28 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:36.945 lcov: LCOV version 1.14 00:03:36.945 23:11:28 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:45.119 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:45.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:45.119 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:45.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:45.119 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:45.119 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 
00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:03.220 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:03.220 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:03.221 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 
00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:03.221 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:03.221 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:03.221 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:05.126 23:11:56 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:05.126 23:11:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:05.126 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:04:05.126 23:11:56 -- spdk/autotest.sh@102 -- # rm -f 00:04:05.126 23:11:56 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.506 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.766 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:04:06.766 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:04:06.766 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:06.766 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:06.766 23:11:58 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:06.766 23:11:58 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:06.766 23:11:58 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:06.766 23:11:58 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:06.766 23:11:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:06.766 23:11:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:06.766 23:11:58 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:06.766 23:11:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 
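[Editor's note] The trace starting above is get_zoned_devs: every /sys/block/nvme* entry is probed for the sysfs zoned attribute before the tests touch it, and the per-device [[ none != none ]] comparisons that follow are those probes evaluating. A condensed sketch of that scan (function and loop names follow the trace; the result map is a simplified stand-in):

    #!/usr/bin/env bash
    # /sys/block/<dev>/queue/zoned reports the zoned model:
    # "none", "host-aware", or "host-managed".
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(</sys/block/$device/queue/zoned) != none ]]
    }

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        is_block_zoned "$dev" && zoned_devs[$dev]=1
    done
    # Later stages only act when this map is non-empty, which is what the
    # (( 0 > 0 )) check in the trace below appears to be testing.
    echo "zoned devices found: ${#zoned_devs[@]}"
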
00:04:06.766 23:11:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:06.766 23:11:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:06.766 23:11:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:06.766 23:11:58 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:06.766 23:11:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:06.766 23:11:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:06.766 23:11:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:06.766 23:11:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:06.766 23:11:58 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:06.766 23:11:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:06.766 23:11:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:06.766 23:11:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:06.766 23:11:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:06.766 23:11:58 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:06.766 23:11:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:06.766 23:11:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:06.766 23:11:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:06.766 23:11:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2c2n1 00:04:06.766 23:11:58 -- common/autotest_common.sh@1647 -- # local device=nvme2c2n1 00:04:06.766 23:11:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:06.766 23:11:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:06.766 23:11:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:06.766 23:11:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:04:06.766 23:11:58 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:04:06.766 23:11:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:06.766 23:11:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:06.766 23:11:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:06.766 23:11:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:04:06.766 23:11:58 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:04:06.766 23:11:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:06.766 23:11:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:06.766 23:11:58 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:06.766 23:11:58 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 /dev/nvme2n1 /dev/nvme3n1 00:04:06.766 23:11:58 -- spdk/autotest.sh@121 -- # grep -v p 00:04:06.766 23:11:58 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:06.766 23:11:58 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:06.766 23:11:58 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:06.766 23:11:58 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:06.766 23:11:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:06.766 No valid GPT data, bailing 00:04:06.766 23:11:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:06.766 23:11:58 -- 
scripts/common.sh@393 -- # pt= 00:04:06.766 23:11:58 -- scripts/common.sh@394 -- # return 1 00:04:06.766 23:11:58 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:06.766 1+0 records in 00:04:06.766 1+0 records out 00:04:06.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00692612 s, 151 MB/s 00:04:06.767 23:11:58 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:06.767 23:11:58 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:06.767 23:11:58 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:04:06.767 23:11:58 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:06.767 23:11:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:07.027 No valid GPT data, bailing 00:04:07.027 23:11:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:07.027 23:11:58 -- scripts/common.sh@393 -- # pt= 00:04:07.027 23:11:58 -- scripts/common.sh@394 -- # return 1 00:04:07.027 23:11:58 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:07.027 1+0 records in 00:04:07.027 1+0 records out 00:04:07.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00675328 s, 155 MB/s 00:04:07.027 23:11:58 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.027 23:11:58 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:07.027 23:11:58 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:04:07.027 23:11:58 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:07.027 23:11:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:07.027 No valid GPT data, bailing 00:04:07.027 23:11:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:07.027 23:11:58 -- scripts/common.sh@393 -- # pt= 00:04:07.027 23:11:58 -- scripts/common.sh@394 -- # return 1 00:04:07.027 23:11:58 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:07.027 1+0 records in 00:04:07.027 1+0 records out 00:04:07.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00636518 s, 165 MB/s 00:04:07.027 23:11:58 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.027 23:11:58 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:07.027 23:11:58 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:04:07.027 23:11:58 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:07.027 23:11:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:07.027 No valid GPT data, bailing 00:04:07.027 23:11:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:07.286 23:11:58 -- scripts/common.sh@393 -- # pt= 00:04:07.286 23:11:58 -- scripts/common.sh@394 -- # return 1 00:04:07.286 23:11:58 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:07.286 1+0 records in 00:04:07.286 1+0 records out 00:04:07.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00396187 s, 265 MB/s 00:04:07.287 23:11:58 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.287 23:11:58 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:07.287 23:11:58 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n1 00:04:07.287 23:11:58 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:04:07.287 23:11:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:07.287 No valid GPT data, bailing 00:04:07.287 23:11:58 
-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:07.287 23:11:58 -- scripts/common.sh@393 -- # pt= 00:04:07.287 23:11:58 -- scripts/common.sh@394 -- # return 1 00:04:07.287 23:11:58 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:07.287 1+0 records in 00:04:07.287 1+0 records out 00:04:07.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00654157 s, 160 MB/s 00:04:07.287 23:11:58 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.287 23:11:58 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:07.287 23:11:58 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme3n1 00:04:07.287 23:11:58 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:04:07.287 23:11:58 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:07.287 No valid GPT data, bailing 00:04:07.287 23:11:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:07.287 23:11:58 -- scripts/common.sh@393 -- # pt= 00:04:07.287 23:11:58 -- scripts/common.sh@394 -- # return 1 00:04:07.287 23:11:58 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:07.287 1+0 records in 00:04:07.287 1+0 records out 00:04:07.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157694 s, 66.5 MB/s 00:04:07.287 23:11:58 -- spdk/autotest.sh@129 -- # sync 00:04:07.287 23:11:58 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:07.287 23:11:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:07.287 23:11:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:10.585 23:12:01 -- spdk/autotest.sh@135 -- # uname -s 00:04:10.585 23:12:01 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:10.585 23:12:01 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:10.585 23:12:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.585 23:12:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.585 23:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:10.585 ************************************ 00:04:10.585 START TEST setup.sh 00:04:10.585 ************************************ 00:04:10.585 23:12:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:10.585 * Looking for test storage... 00:04:10.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:10.585 23:12:01 -- setup/test-setup.sh@10 -- # uname -s 00:04:10.585 23:12:01 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:10.585 23:12:01 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:10.585 23:12:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.585 23:12:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.585 23:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:10.585 ************************************ 00:04:10.585 START TEST acl 00:04:10.585 ************************************ 00:04:10.585 23:12:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:10.585 * Looking for test storage... 
00:04:10.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:10.585 23:12:02 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:10.585 23:12:02 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:10.585 23:12:02 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:10.585 23:12:02 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:10.585 23:12:02 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:10.585 23:12:02 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:10.585 23:12:02 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:10.585 23:12:02 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.585 23:12:02 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:10.585 23:12:02 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:10.585 23:12:02 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:10.585 23:12:02 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:10.585 23:12:02 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:10.585 23:12:02 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:10.585 23:12:02 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:10.585 23:12:02 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:10.585 23:12:02 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:10.585 23:12:02 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:10.585 23:12:02 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:10.585 23:12:02 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:10.585 23:12:02 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:10.585 23:12:02 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:10.585 23:12:02 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:10.585 23:12:02 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:10.585 23:12:02 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:10.585 23:12:02 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2c2n1 00:04:10.585 23:12:02 -- common/autotest_common.sh@1647 -- # local device=nvme2c2n1 00:04:10.585 23:12:02 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:10.585 23:12:02 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:10.585 23:12:02 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:10.585 23:12:02 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:04:10.585 23:12:02 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:04:10.585 23:12:02 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:10.585 23:12:02 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:10.585 23:12:02 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:10.585 23:12:02 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:04:10.585 23:12:02 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:04:10.585 23:12:02 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:10.585 23:12:02 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:10.585 23:12:02 -- setup/acl.sh@12 -- # devs=() 00:04:10.585 23:12:02 -- setup/acl.sh@12 -- # declare -a devs 
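[Editor's note] The pre-clean pass traced a little earlier probed each whole NVMe namespace for a partition table and zeroed the first MiB of any blank one ("No valid GPT data, bailing", then dd). A simplified sketch of that wipe, assuming blkid and root; the scripted spdk-gpt.py probe is replaced here by the blkid check alone:

    #!/usr/bin/env bash
    for dev in $(ls /dev/nvme*n* | grep -v p || true); do  # skip partitions (nvme0n1p1, ...)
        # blkid prints the partition-table type (e.g. "gpt") if one exists.
        pt=$(blkid -s PTTYPE -o value "$dev")
        if [[ -z $pt ]]; then
            # Nothing recognisable: scrub 1 MiB of stale metadata.
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
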
00:04:10.585 23:12:02 -- setup/acl.sh@13 -- # drivers=() 00:04:10.585 23:12:02 -- setup/acl.sh@13 -- # declare -A drivers 00:04:10.585 23:12:02 -- setup/acl.sh@51 -- # setup reset 00:04:10.585 23:12:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.585 23:12:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.964 23:12:03 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:11.964 23:12:03 -- setup/acl.sh@16 -- # local dev driver 00:04:11.964 23:12:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.964 23:12:03 -- setup/acl.sh@15 -- # setup output status 00:04:11.964 23:12:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.964 23:12:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:12.223 Hugepages 00:04:12.223 node hugesize free / total 00:04:12.223 23:12:03 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:12.223 23:12:03 -- setup/acl.sh@19 -- # continue 00:04:12.223 23:12:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.223 00:04:12.223 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:12.223 23:12:03 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:12.223 23:12:03 -- setup/acl.sh@19 -- # continue 00:04:12.223 23:12:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.482 23:12:04 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:12.482 23:12:04 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:12.482 23:12:04 -- setup/acl.sh@20 -- # continue 00:04:12.482 23:12:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.482 23:12:04 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:12.482 23:12:04 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:12.482 23:12:04 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:12.482 23:12:04 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:12.482 23:12:04 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:12.482 23:12:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.742 23:12:04 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:12.742 23:12:04 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:12.742 23:12:04 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:12.742 23:12:04 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:12.742 23:12:04 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:12.742 23:12:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.742 23:12:04 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:04:12.742 23:12:04 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:12.742 23:12:04 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:12.742 23:12:04 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:12.742 23:12:04 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:12.742 23:12:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.002 23:12:04 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:04:13.002 23:12:04 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:13.002 23:12:04 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:13.002 23:12:04 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:13.002 23:12:04 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:13.002 23:12:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.002 23:12:04 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:13.002 23:12:04 -- setup/acl.sh@54 -- # run_test denied denied 00:04:13.002 23:12:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:13.002 
00:04:13.002 23:12:04 -- setup/acl.sh@54 -- # run_test denied denied
00:04:13.002 23:12:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:13.002 23:12:04 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:13.002 23:12:04 -- common/autotest_common.sh@10 -- # set +x
00:04:13.002 ************************************
00:04:13.002 START TEST denied
00:04:13.002 ************************************
00:04:13.002 23:12:04 -- common/autotest_common.sh@1104 -- # denied
00:04:13.002 23:12:04 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0'
00:04:13.002 23:12:04 -- setup/acl.sh@38 -- # setup output config
00:04:13.002 23:12:04 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0'
00:04:13.002 23:12:04 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:13.002 23:12:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:14.910 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0
00:04:14.910 23:12:06 -- setup/acl.sh@40 -- # verify 0000:00:06.0
00:04:14.910 23:12:06 -- setup/acl.sh@28 -- # local dev driver
00:04:14.910 23:12:06 -- setup/acl.sh@30 -- # for dev in "$@"
00:04:14.910 23:12:06 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]]
00:04:14.910 23:12:06 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver
00:04:14.910 23:12:06 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:14.910 23:12:06 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:14.910 23:12:06 -- setup/acl.sh@41 -- # setup reset
00:04:14.910 23:12:06 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:14.910 23:12:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:21.481 ************************************
00:04:21.481 END TEST denied
00:04:21.481 ************************************
00:04:21.481
00:04:21.481 real 0m8.037s
00:04:21.481 user 0m1.043s
00:04:21.481 sys 0m2.167s
00:04:21.481 23:12:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:21.481 23:12:12 -- common/autotest_common.sh@10 -- # set +x
00:04:21.481 23:12:12 -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:21.481 23:12:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:21.481 23:12:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:21.481 23:12:12 -- common/autotest_common.sh@10 -- # set +x
00:04:21.481 ************************************
00:04:21.481 START TEST allowed
00:04:21.481 ************************************
00:04:21.481 23:12:12 -- common/autotest_common.sh@1104 -- # allowed
00:04:21.481 23:12:12 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0
00:04:21.481 23:12:12 -- setup/acl.sh@45 -- # setup output config
00:04:21.481 23:12:12 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*'
00:04:21.481 23:12:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:21.481 23:12:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:22.428 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:22.428 23:12:14 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0
00:04:22.428 23:12:14 -- setup/acl.sh@28 -- # local dev driver
[23:12:14, setup/acl.sh@30-33: for each of 0000:00:07.0, 0000:00:08.0 and 0000:00:09.0 — the /sys/bus/pci/devices/<bdf> node exists, readlink -f .../driver resolves to /sys/bus/pci/drivers/nvme, and [[ nvme == \n\v\m\e ]] passes]
00:04:22.428 23:12:14 -- setup/acl.sh@48 -- # setup reset
00:04:22.428 23:12:14 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:22.428 23:12:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:24.417
00:04:24.417 real 0m3.003s
00:04:24.417 user 0m1.166s
00:04:24.417 sys 0m1.862s
00:04:24.417 23:12:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:24.417 ************************************
00:04:24.417 END TEST allowed
00:04:24.417 ************************************
00:04:24.417 23:12:15 -- common/autotest_common.sh@10 -- # set +x
00:04:24.417 ************************************
00:04:24.417 END TEST acl
00:04:24.417 ************************************
00:04:24.417
00:04:24.417 real 0m13.766s
00:04:24.417 user 0m3.235s
00:04:24.417 sys 0m5.778s
00:04:24.417 23:12:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:24.417 23:12:15 -- common/autotest_common.sh@10 -- # set +x
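[Note: the denied/allowed tests just ran steer scripts/setup.sh entirely through environment variables — PCI_BLOCKED keeps a controller on its kernel driver, PCI_ALLOWED restricts rebinding to the listed BDFs. The pattern, exactly as exercised above (needs root; on this VM the userspace driver is uio_pci_generic):

    #!/usr/bin/env bash
    SETUP=/home/vagrant/spdk_repo/spdk/scripts/setup.sh

    # Deny one controller: setup.sh must report it as skipped.
    PCI_BLOCKED=' 0000:00:06.0' "$SETUP" config | grep 'Skipping denied controller at 0000:00:06.0'

    # Allow only that controller: only it should be rebound away from nvme.
    PCI_ALLOWED=0000:00:06.0 "$SETUP" config | grep -E '0000:00:06.0 .*: nvme -> .*'

    "$SETUP" reset   # rebind everything back to the kernel drivers
]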
00:04:24.417 23:12:15 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:24.417 23:12:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:24.417 23:12:15 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:24.417 23:12:15 -- common/autotest_common.sh@10 -- # set +x
00:04:24.417 ************************************
00:04:24.417 START TEST hugepages
00:04:24.417 ************************************
00:04:24.417 23:12:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:24.417 * Looking for test storage...
00:04:24.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:24.417 23:12:15 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:24.417 23:12:15 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:24.417 23:12:15 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:24.417 23:12:15 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:24.417 23:12:15 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:24.417 23:12:15 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:24.417 23:12:15 -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:24.417 23:12:15 -- setup/common.sh@18 -- # local node=
00:04:24.417 23:12:15 -- setup/common.sh@19 -- # local var val
00:04:24.417 23:12:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.417 23:12:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.417 23:12:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.417 23:12:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.417 23:12:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.417 23:12:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.417 23:12:15 -- setup/common.sh@31 -- # IFS=': '
00:04:24.417 23:12:15 -- setup/common.sh@31 -- # read -r var val _
00:04:24.417 23:12:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5863496 kB' 'MemAvailable: 7403328 kB' 'Buffers: 2436 kB' 'Cached: 1753740 kB' 'SwapCached: 0 kB' 'Active: 449288 kB' 'Inactive: 1413680 kB' 'Active(anon): 117304 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 108668 kB' 'Mapped: 48680 kB' 'Shmem: 10512 kB' 'KReclaimable: 62664 kB' 'Slab: 141000 kB' 'SReclaimable: 62664 kB' 'SUnreclaim: 78336 kB' 'KernelStack: 6348 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 334036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[23:12:15, setup/common.sh@31-32: IFS=': ' read -r var val _ stepped through each /proc/meminfo field above — MemTotal through DirectMap1G — continuing until var matched Hugepagesize]
00:04:24.419 23:12:15 -- setup/common.sh@33 -- # echo 2048
00:04:24.419 23:12:15 -- setup/common.sh@33 -- # return 0
00:04:24.419 23:12:15 -- setup/hugepages.sh@16 -- # default_hugepages=2048
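[Note: get_meminfo above scans /proc/meminfo (or one node's meminfo) field by field and prints the value of the requested key. A condensed sketch of the same idea — awk stands in for the script's read loop, so this is an equivalent technique rather than the script's own code:

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo field (kB suffix stripped).
    get_meminfo() {
        local key=$1
        awk -F': *' -v key="$key" '$1 == key { sub(/ kB$/, "", $2); print $2 }' /proc/meminfo
    }
    default_hugepages=$(get_meminfo Hugepagesize)   # e.g. 2048
    echo "Hugepagesize: ${default_hugepages} kB"
]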
00:04:24.419 23:12:15 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:24.419 23:12:15 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:24.419 23:12:15 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:24.419 23:12:15 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:24.419 23:12:15 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:24.419 23:12:15 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:24.419 23:12:15 -- setup/hugepages.sh@207 -- # get_nodes
00:04:24.419 23:12:15 -- setup/hugepages.sh@27 -- # local node
00:04:24.419 23:12:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:24.419 23:12:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:24.419 23:12:15 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:24.419 23:12:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:24.419 23:12:15 -- setup/hugepages.sh@208 -- # clear_hp
00:04:24.419 23:12:15 -- setup/hugepages.sh@37 -- # local node hp
00:04:24.419 23:12:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:24.419 23:12:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:24.419 23:12:15 -- setup/hugepages.sh@41 -- # echo 0
00:04:24.419 23:12:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:24.419 23:12:15 -- setup/hugepages.sh@41 -- # echo 0
00:04:24.419 23:12:15 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:24.419 23:12:15 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
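[Note: clear_hp above zeroes every per-node hugepage pool through sysfs before the test sets its own count. A minimal sketch of that loop, mirroring hugepages.sh@39-41 (needs root):

    #!/usr/bin/env bash
    # Zero nr_hugepages for every page size on every NUMA node.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes
]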
00:04:24.419 23:12:15 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:24.419 23:12:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:24.419 23:12:15 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:24.419 23:12:15 -- common/autotest_common.sh@10 -- # set +x
00:04:24.419 ************************************
00:04:24.419 START TEST default_setup
00:04:24.419 ************************************
00:04:24.419 23:12:15 -- common/autotest_common.sh@1104 -- # default_setup
00:04:24.419 23:12:15 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:24.419 23:12:15 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:24.419 23:12:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:24.419 23:12:15 -- setup/hugepages.sh@51 -- # shift
00:04:24.419 23:12:15 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:24.419 23:12:15 -- setup/hugepages.sh@52 -- # local node_ids
00:04:24.419 23:12:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:24.419 23:12:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:24.419 23:12:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:24.419 23:12:15 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:24.419 23:12:15 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:24.419 23:12:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:24.419 23:12:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:24.419 23:12:15 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:24.419 23:12:15 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:24.419 23:12:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:24.419 23:12:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:24.419 23:12:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:24.419 23:12:15 -- setup/hugepages.sh@73 -- # return 0
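[Note: get_test_nr_hugepages above turns a memory budget into a page count — 2097152 kB requested divided by the 2048 kB Hugepagesize gives the nr_hugepages=1024 seen in the trace, which is then assigned to node 0. The same arithmetic as a sketch:

    #!/usr/bin/env bash
    size_kb=2097152          # requested hugepage memory (2 GiB)
    default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
    (( size_kb >= default_hugepages )) || { echo "budget below one page" >&2; exit 1; }
    nr_hugepages=$(( size_kb / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"   # -> 1024
]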
00:04:24.419 23:12:15 -- setup/hugepages.sh@137 -- # setup output
00:04:24.419 23:12:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:24.419 23:12:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:25.797 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:25.797 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:25.797 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:25.797 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:04:25.797 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:04:26.059 23:12:17 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:26.059 23:12:17 -- setup/hugepages.sh@89 -- # local node
00:04:26.059 23:12:17 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:26.059 23:12:17 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:26.059 23:12:17 -- setup/hugepages.sh@92 -- # local surp
00:04:26.059 23:12:17 -- setup/hugepages.sh@93 -- # local resv
00:04:26.059 23:12:17 -- setup/hugepages.sh@94 -- # local anon
00:04:26.059 23:12:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:26.059 23:12:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:26.059 23:12:17 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:26.059 23:12:17 -- setup/common.sh@18 -- # local node=
00:04:26.060 23:12:17 -- setup/common.sh@19 -- # local var val
00:04:26.060 23:12:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.060 23:12:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.060 23:12:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.060 23:12:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.060 23:12:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.060 23:12:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.060 23:12:17 -- setup/common.sh@31 -- # IFS=': '
00:04:26.060 23:12:17 -- setup/common.sh@31 -- # read -r var val _
00:04:26.060 23:12:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7984504 kB' 'MemAvailable: 9524124 kB' 'Buffers: 2436 kB' 'Cached: 1753728 kB' 'SwapCached: 0 kB' 'Active: 461812 kB' 'Inactive: 1413704 kB' 'Active(anon): 129828 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413704 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121196 kB' 'Mapped: 48600 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139972 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77784 kB' 'KernelStack: 6384 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[23:12:17, setup/common.sh@31-32: the same field-by-field scan as above ran over this snapshot until var matched AnonHugePages]
00:04:26.061 23:12:17 -- setup/common.sh@33 -- # echo 0
00:04:26.061 23:12:17 -- setup/common.sh@33 -- # return 0
00:04:26.061 23:12:17 -- setup/hugepages.sh@97 -- # anon=0
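[Note: the hugepages.sh@96 record above compares the transparent-hugepage state string ("always [madvise] never" — the brackets mark the active mode) against the pattern *[never]*. Reading the active THP mode directly, as a sketch:

    #!/usr/bin/env bash
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        mode=${thp#*\[}; mode=${mode%%\]*}   # extract the text between the brackets
        echo "THP enabled, active mode: $mode"
    else
        echo "THP disabled"
    fi
]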
00:04:26.061 23:12:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:26.061 23:12:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.061 23:12:17 -- setup/common.sh@18 -- # local node=
00:04:26.061 23:12:17 -- setup/common.sh@19 -- # local var val
00:04:26.061 23:12:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.061 23:12:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.061 23:12:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.061 23:12:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.061 23:12:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.061 23:12:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.061 23:12:17 -- setup/common.sh@31 -- # IFS=': '
00:04:26.061 23:12:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7984504 kB' 'MemAvailable: 9524128 kB' 'Buffers: 2436 kB' 'Cached: 1753728 kB' 'SwapCached: 0 kB' 'Active: 461692 kB' 'Inactive: 1413708 kB' 'Active(anon): 129708 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 120780 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139988 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77800 kB' 'KernelStack: 6384 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[23:12:17, setup/common.sh@31-32: the scan repeated over this snapshot until var matched HugePages_Surp]
00:04:26.062 23:12:17 -- setup/common.sh@33 -- # echo 0
00:04:26.062 23:12:17 -- setup/common.sh@33 -- # return 0
00:04:26.062 23:12:17 -- setup/hugepages.sh@99 -- # surp=0
00:04:26.062 23:12:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:26.062 23:12:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:26.062 23:12:17 -- setup/common.sh@18 -- # local node=
00:04:26.062 23:12:17 -- setup/common.sh@19 -- # local var val
00:04:26.062 23:12:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.062 23:12:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.062 23:12:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.062 23:12:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.063 23:12:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.063 23:12:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.063 23:12:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7984504 kB' 'MemAvailable: 9524128 kB' 'Buffers: 2436 kB' 'Cached:
1753728 kB' 'SwapCached: 0 kB' 'Active: 461716 kB' 'Inactive: 1413708 kB' 'Active(anon): 129732 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 120804 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 140000 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77812 kB' 'KernelStack: 6384 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- 
# [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 
-- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.063 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.063 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 
-- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 
23:12:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # continue 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.064 23:12:17 -- setup/common.sh@33 -- # echo 0 00:04:26.064 23:12:17 -- setup/common.sh@33 -- # return 0 00:04:26.064 nr_hugepages=1024 00:04:26.064 resv_hugepages=0 00:04:26.064 surplus_hugepages=0 00:04:26.064 anon_hugepages=0 00:04:26.064 23:12:17 -- setup/hugepages.sh@100 -- # resv=0 00:04:26.064 23:12:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.064 23:12:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.064 23:12:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.064 23:12:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.064 23:12:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.064 23:12:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.064 23:12:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.064 23:12:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.064 23:12:17 -- setup/common.sh@18 -- # local node= 00:04:26.064 23:12:17 -- setup/common.sh@19 -- # local var val 00:04:26.064 23:12:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.064 23:12:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.064 23:12:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.064 23:12:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.064 23:12:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.064 23:12:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.064 23:12:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.064 23:12:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7984504 kB' 'MemAvailable: 9524128 kB' 'Buffers: 2436 kB' 'Cached: 1753728 kB' 'SwapCached: 0 kB' 'Active: 461608 kB' 'Inactive: 1413708 kB' 'Active(anon): 129624 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 120696 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139996 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77808 kB' 'KernelStack: 6384 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 
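Every get_meminfo call in this trace follows the pattern just shown: snapshot the relevant meminfo file into an array, print it, then scan it key by key. A minimal reconstruction, inferred from the setup/common.sh@16-@33 line references in the trace (the real SPDK helper may differ in details; extglob is assumed for the Node-prefix strip):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern below

    # get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo, or from
    # /sys/devices/system/node/nodeN/meminfo when NODE is given (sketch).
    get_meminfo() {
        local get=$1
        local node=$2
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # With NODE empty this tests .../node/node/meminfo, which never
        # exists, so the global file is kept (matches common.sh@23 above).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan the snapshot; every non-matching key just continues.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as in this run, get_meminfo HugePages_Rsvd prints 0 and get_meminfo HugePages_Total prints 1024.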
00:04:26.065 [trace condensed: setup/common.sh@31-@32 scan this snapshot key by key for HugePages_Total; every preceding key hits "continue"]
00:04:26.066 23:12:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.066 23:12:17 -- setup/common.sh@33 -- # echo 1024
00:04:26.066 23:12:17 -- setup/common.sh@33 -- # return 0
00:04:26.066 23:12:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:26.066 23:12:17 -- setup/hugepages.sh@112 -- # get_nodes
00:04:26.066 23:12:17 -- setup/hugepages.sh@27 -- # local node
00:04:26.066 23:12:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:26.066 23:12:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:26.066 23:12:17 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:26.066 23:12:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:26.066 23:12:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:26.066 23:12:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:26.066 23:12:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:26.066 23:12:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.066 23:12:17 -- setup/common.sh@18 -- # local node=0
00:04:26.066 23:12:17 -- setup/common.sh@19 -- # local var val
00:04:26.066 23:12:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.066 23:12:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.066 23:12:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:26.066 23:12:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:26.066 23:12:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.066 23:12:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.066 23:12:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7984504 kB' 'MemUsed: 4257468 kB' 'SwapCached: 0 kB' 'Active: 461608 kB' 'Inactive: 1413708 kB' 'Active(anon): 129624 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1756164 kB' 'Mapped: 48624 kB' 'AnonPages: 120696 kB' 'Shmem: 10472 kB' 'KernelStack: 6384 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62188 kB' 'Slab: 139996 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
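The query above is the per-node variant: because get_meminfo received node id 0, common.sh@23-@24 switch the source file to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the @29 expansion strips. A hedged sketch of just that selection, under the same assumptions as the reconstruction above:

    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    # Prefer the per-node sysfs copy when it exists (common.sh@23-@24).
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # sysfs lines read e.g. "Node 0 HugePages_Surp: 0"; drop the prefix
    # so the same key scan works for both file formats (common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")

Note that the node0 file carries fewer keys than /proc/meminfo (no swap, vmalloc, or commit fields), which is why this snapshot is shorter than the previous ones.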
00:04:26.066 [trace condensed: the key-by-key scan walks the node0 snapshot; every key before the match hits "continue"]
00:04:26.067 23:12:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.067 23:12:17 -- setup/common.sh@33 -- # echo 0
00:04:26.067 23:12:17 -- setup/common.sh@33 -- # return 0
00:04:26.067 node0=1024 expecting 1024
00:04:26.067 ************************************
00:04:26.067 END TEST default_setup
00:04:26.067 ************************************
00:04:26.067 23:12:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:26.067 23:12:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.067 23:12:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.067 23:12:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.067 23:12:17 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:26.067 23:12:17 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:26.067 real 0m1.791s
00:04:26.067 user 0m0.646s
00:04:26.067 sys 0m1.122s
00:04:26.067 23:12:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:26.067 23:12:17 -- common/autotest_common.sh@10 -- # set +x
00:04:26.326 23:12:17 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:26.326 23:12:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:26.326 23:12:17 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:26.326 23:12:17 -- common/autotest_common.sh@10 -- # set +x
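default_setup ends above having verified the bookkeeping it sampled: with surp=0 and resv=0, hugepages.sh@107/@110 reduce to checking that HugePages_Total equals the requested page count, and @130 confirms node0 holds all 1024 pages. The same arithmetic spelled out, with all values taken from this run's trace:

    nr_hugepages=1024   # requested global count of 2048 kB hugepages
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    total=1024          # HugePages_Total from /proc/meminfo
    (( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0
    node0=1024          # nodes_test[0] after the resv/surp adjustments (both 0 here)
    [[ $node0 == 1024 ]] && echo "node0=$node0 expecting 1024"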
00:04:26.326 ************************************
00:04:26.326 START TEST per_node_1G_alloc
00:04:26.326 ************************************
00:04:26.326 23:12:17 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:04:26.326 23:12:17 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:26.326 23:12:17 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:26.326 [trace condensed: hugepages.sh@49-@73 derive the request: size=1048576 kB on node_ids=('0'), giving nr_hugepages=512 and nodes_test[0]=512 for the single requested node]
00:04:26.326 23:12:17 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:26.326 23:12:17 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:26.326 23:12:17 -- setup/hugepages.sh@146 -- # setup output
00:04:26.326 23:12:17 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:26.326 23:12:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:26.895 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:27.156 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:27.156 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:27.156 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:27.156 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:27.156 23:12:18 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:27.156 23:12:18 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:27.156 23:12:18 -- setup/hugepages.sh@89 -- # local node
00:04:27.156 23:12:18 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:27.156 23:12:18 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:27.156 23:12:18 -- setup/hugepages.sh@92 -- # local surp
00:04:27.156 23:12:18 -- setup/hugepages.sh@93 -- # local resv
00:04:27.156 23:12:18 -- setup/hugepages.sh@94 -- # local anon
00:04:27.156 23:12:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:27.156 23:12:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:27.156 [trace condensed: get_meminfo locals and snapshot setup repeat (common.sh@17-@29) with get=AnonHugePages and node unset, so mem_f=/proc/meminfo]
00:04:27.156 23:12:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9038160 kB' 'MemAvailable: 10577784 kB' 'Buffers: 2436 kB' 'Cached: 1753728 kB' 'SwapCached: 0 kB' 'Active: 462080 kB' 'Inactive: 1413708 kB' 'Active(anon): 130096 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121116 kB' 'Mapped: 48664 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 140052 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77864 kB' 'KernelStack: 6432 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:04:27.156 [trace condensed: the key-by-key scan starts over this snapshot for AnonHugePages; it resumes below]
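Before this sampling, hugepages.sh@96 gates on the transparent-hugepage mode: the string under test, "always [madvise] never", is the usual format of /sys/kernel/mm/transparent_hugepage/enabled (the exact file read is an assumption; the trace only shows the resulting string). Since the selected mode is not [never], the script goes on to sample AnonHugePages, presumably so THP-backed anonymous memory can be separated from the explicit hugepage counts. A sketch:

    # Gate as traced at hugepages.sh@96 (source path assumed):
    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP not globally disabled: record AnonHugePages as a baseline.
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    fi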
IFS=': '
00:04:27.157 23:12:18 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@32 matches each remaining snapshot key, Inactive through HardwareCorrupted, against AnonHugePages and issues one continue per non-match]
00:04:27.157 23:12:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:27.157 23:12:18 -- setup/common.sh@33 -- # echo 0
00:04:27.157 23:12:18 -- setup/common.sh@33 -- # return 0
00:04:27.157 23:12:18 -- setup/hugepages.sh@97 -- # anon=0
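The loop traced above is setup/common.sh's get_meminfo helper walking a meminfo snapshot one "Key: value" pair at a time; the backslashed right-hand sides such as \A\n\o\n\H\u\g\e\P\a\g\e\s are simply how xtrace prints a quoted literal pattern in a [[ == ]] test. A minimal sketch of the same idiom, not the suite's exact helper (the function name here is illustrative):

    # Print the value recorded for one key in /proc/meminfo, or 0 if absent.
    # Each non-matching key corresponds to one "continue" in the trace above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < /proc/meminfo
        echo 0
    }
    get_meminfo_sketch AnonHugePages    # -> 0 on this runner

The per-node variant additionally strips the "Node <n> " prefix that sysfs meminfo files carry, which is what the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace does.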
00:04:27.157 23:12:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:27.157 23:12:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:27.157 23:12:18 -- setup/common.sh@18 -- # local node=
00:04:27.157 23:12:18 -- setup/common.sh@19 -- # local var val
00:04:27.157 23:12:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.157 23:12:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.157 23:12:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.157 23:12:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.157 23:12:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.157 23:12:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.157 23:12:18 -- setup/common.sh@31 -- # IFS=': '
00:04:27.157 23:12:18 -- setup/common.sh@31 -- # read -r var val _
00:04:27.157 23:12:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9038680 kB' 'MemAvailable: 10578304 kB' 'Buffers: 2436 kB' 'Cached: 1753728 kB' 'SwapCached: 0 kB' 'Active: 461856 kB' 'Inactive: 1413708 kB' 'Active(anon): 129872 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 120932 kB' 'Mapped: 48564 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 140052 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77864 kB' 'KernelStack: 6396 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: setup/common.sh@32 matches every key of this snapshot, MemTotal through HugePages_Rsvd, against HugePages_Surp and issues one continue per non-match]
00:04:27.159 23:12:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:27.159 23:12:18 -- setup/common.sh@33 -- # echo 0
00:04:27.159 23:12:18 -- setup/common.sh@33 -- # return 0
00:04:27.159 23:12:18 -- setup/hugepages.sh@99 -- # surp=0
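get_meminfo re-reads the snapshot from the top for every key it is asked for, which is why the same MemTotal-through-DirectMap1G listing scrolls past once per query. A one-pass alternative would cache the snapshot in an associative array; this is a sketch of that design choice, not what setup/common.sh actually does:

    # Sketch: capture all "Key: value" pairs in a single pass (bash 4+).
    declare -A mem
    while IFS=': ' read -r key val _; do
        mem[$key]=$val
    done < /proc/meminfo
    echo "anon=${mem[AnonHugePages]} surp=${mem[HugePages_Surp]} resv=${mem[HugePages_Rsvd]}"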
00:04:27.159 23:12:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:27.159 23:12:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:27.159 23:12:18 -- setup/common.sh@18 -- # local node=
00:04:27.159 23:12:18 -- setup/common.sh@19 -- # local var val
00:04:27.159 23:12:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.159 23:12:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.159 23:12:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.159 23:12:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.159 23:12:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.159 23:12:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.159 23:12:18 -- setup/common.sh@31 -- # IFS=': '
00:04:27.159 23:12:18 -- setup/common.sh@31 -- # read -r var val _
00:04:27.159 23:12:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9038680 kB' 'MemAvailable: 10578304 kB' 'Buffers: 2436 kB' 'Cached: 1753728 kB' 'SwapCached: 0 kB' 'Active: 461972 kB' 'Inactive: 1413708 kB' 'Active(anon): 129988 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121428 kB' 'Mapped: 49084 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 140048 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77860 kB' 'KernelStack: 6460 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: setup/common.sh@32 matches every key of this snapshot, MemTotal through HugePages_Free, against HugePages_Rsvd and issues one continue per non-match]
00:04:27.160 23:12:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:27.160 23:12:18 -- setup/common.sh@33 -- # echo 0
00:04:27.160 23:12:18 -- setup/common.sh@33 -- # return 0
00:04:27.160 23:12:18 -- setup/hugepages.sh@100 -- # resv=0
00:04:27.160 nr_hugepages=512
00:04:27.160 23:12:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:27.160 resv_hugepages=0
00:04:27.160 23:12:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:27.160 surplus_hugepages=0
00:04:27.160 23:12:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:27.160 anon_hugepages=0
00:04:27.160 23:12:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:27.160 23:12:18 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:27.160 23:12:18 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
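The hugepages.sh@107 check asserts that every configured page is accounted for: the HugePages_Total read back from the kernel must equal the count the test requested plus any surplus and reserved pages. The same check in isolation, as a sketch (the 512 expectation is this runner's value, and the awk query is an illustrative stand-in for get_meminfo):

    # Sketch of the accounting assertion behind hugepages.sh@107.
    nr_hugepages=512 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    fi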
00:04:27.160 23:12:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:27.160 23:12:18 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:27.160 23:12:18 -- setup/common.sh@18 -- # local node=
00:04:27.160 23:12:18 -- setup/common.sh@19 -- # local var val
00:04:27.160 23:12:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.160 23:12:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.160 23:12:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.160 23:12:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.160 23:12:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.160 23:12:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.161 23:12:18 -- setup/common.sh@31 -- # IFS=': '
00:04:27.161 23:12:18 -- setup/common.sh@31 -- # read -r var val _
00:04:27.161 23:12:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9039200 kB' 'MemAvailable: 10578824 kB' 'Buffers: 2436 kB' 'Cached: 1753728 kB' 'SwapCached: 0 kB' 'Active: 461844 kB' 'Inactive: 1413708 kB' 'Active(anon): 129860 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121068 kB' 'Mapped: 48824 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 140040 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77852 kB' 'KernelStack: 6428 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 353272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: setup/common.sh@32 matches every key of this snapshot, MemTotal through Unaccepted, against HugePages_Total and issues one continue per non-match]
00:04:27.162 23:12:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:27.162 23:12:18 -- setup/common.sh@33 -- # echo 512
00:04:27.162 23:12:18 -- setup/common.sh@33 -- # return 0
00:04:27.162 23:12:18 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:27.162 23:12:18 -- setup/hugepages.sh@112 -- # get_nodes
00:04:27.162 23:12:18 -- setup/hugepages.sh@27 -- # local node
00:04:27.162 23:12:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:27.162 23:12:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:27.162 23:12:18 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:27.162 23:12:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:27.162 23:12:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:27.162 23:12:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:27.162 23:12:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:27.162 23:12:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:27.162 23:12:18 -- setup/common.sh@18 -- # local node=0
00:04:27.162 23:12:18 -- setup/common.sh@19 -- # local var val
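With the global counters verified, the test walks every NUMA node and repeats the surplus query against the node's own sysfs meminfo file. A sketch of that walk; on this single-node VM, node0 carries the full 512-page expectation, and the awk one-liner is an illustrative stand-in for get_meminfo's node mode:

    # Sketch: assign the expected page count per node, then verify each node.
    declare -A nodes_test
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        nodes_test[${node_dir##*node}]=512    # one NUMA node on this runner
    done
    for node in "${!nodes_test[@]}"; do
        # node meminfo lines read "Node <n> HugePages_Surp: <v>", so $4 is the value
        surp=$(awk '/HugePages_Surp:/ {print $4}' "/sys/devices/system/node/node$node/meminfo")
        echo "node$node=${nodes_test[$node]} expecting ${nodes_test[$node]} (surplus ${surp:-0})"
    done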
var val
00:04:27.162 23:12:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.162 23:12:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.162 23:12:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:27.162 23:12:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:27.162 23:12:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.162 23:12:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.162 23:12:18 -- setup/common.sh@31 -- # IFS=': '
00:04:27.162 23:12:18 -- setup/common.sh@31 -- # read -r var val _
00:04:27.162 23:12:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9039200 kB' 'MemUsed: 3202772 kB' 'SwapCached: 0 kB' 'Active: 461456 kB' 'Inactive: 1413708 kB' 'Active(anon): 129472 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413708 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1756164 kB' 'Mapped: 48824 kB' 'AnonPages: 120612 kB' 'Shmem: 10472 kB' 'KernelStack: 6412 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62188 kB' 'Slab: 140044 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31/@32 xtrace: the IFS=': ' / read -r var val _ / continue comparison repeats once per node0 meminfo key; identical iterations collapsed ...]
00:04:27.163 23:12:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:27.163 23:12:18 -- setup/common.sh@33 -- # echo 0
00:04:27.163 23:12:18 -- setup/common.sh@33 -- # return 0
00:04:27.163 23:12:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:27.163 23:12:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:27.163 23:12:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:27.163 23:12:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:27.163 node0=512 expecting 512
00:04:27.163 23:12:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:27.163 23:12:18 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:27.163 
00:04:27.163 real	0m1.027s
00:04:27.163 user	0m0.399s
00:04:27.163 sys	0m0.691s
00:04:27.163 23:12:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:27.163 23:12:18 -- common/autotest_common.sh@10 -- # set +x
00:04:27.163 ************************************
00:04:27.163 END TEST per_node_1G_alloc
00:04:27.163 ************************************
00:04:27.423 23:12:18 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:27.423 23:12:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:27.423 23:12:18 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:27.423 23:12:18 -- common/autotest_common.sh@10 -- # set +x
00:04:27.423 ************************************
00:04:27.423 START TEST even_2G_alloc
00:04:27.423 ************************************
00:04:27.423 23:12:18 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:04:27.423 23:12:18 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:27.423 23:12:18 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:27.423 23:12:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:27.423 23:12:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:27.423 23:12:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:27.423 23:12:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:27.423 23:12:18 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:27.423 23:12:18 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:27.423 23:12:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:27.423 23:12:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:27.423 23:12:18 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:27.423 23:12:18 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:27.423 23:12:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:27.423 23:12:18 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:27.423 23:12:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:27.423 23:12:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:27.423 23:12:18 -- setup/hugepages.sh@83 -- # : 0
00:04:27.423 23:12:18 -- setup/hugepages.sh@84 -- # : 0
00:04:27.423 23:12:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
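The get_test_nr_hugepages trace above reduces to one division: the requested pool (2097152 kB, i.e. 2 GiB, matching the test name even_2G_alloc) over the default hugepage size (2048 kB, per the Hugepagesize field in the snapshots) gives nr_hugepages=1024, and with no user-supplied node list and a single NUMA node the whole budget lands on node0. A minimal sketch of that flow, reconstructed from the xtrace; names follow the trace, but the upstream hugepages.sh also handles multi-node splits and remainders that are skipped here:

    #!/usr/bin/env bash
    # Sketch of get_test_nr_hugepages / get_test_nr_hugepages_per_node as
    # traced at hugepages.sh@49-84. Assumption: default_hugepages holds the
    # Hugepagesize value in kB; the real helper derives it from /proc/meminfo.
    default_hugepages=2048                 # 2 MiB pages, per the snapshots

    get_test_nr_hugepages() {
        local size=$1                      # requested pool in kB; 2097152 = 2 GiB
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024

        # Single-node reduction of get_test_nr_hugepages_per_node: with no
        # user nodes, the last node takes the whole budget (hugepages.sh@82).
        local _nr_hugepages=$nr_hugepages _no_nodes=1
        nodes_test=()
        while ((_no_nodes > 0)); do
            nodes_test[_no_nodes - 1]=$_nr_hugepages
            ((_no_nodes--))
        done
    }

    get_test_nr_hugepages 2097152
    echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"   # 1024 and 1024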
00:04:27.423 23:12:18 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:27.423 23:12:18 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:27.423 23:12:18 -- setup/hugepages.sh@153 -- # setup output
00:04:27.423 23:12:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:27.423 23:12:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:27.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:28.255 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:28.255 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:28.255 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:28.255 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:28.255 23:12:19 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:28.255 23:12:19 -- setup/hugepages.sh@89 -- # local node
00:04:28.255 23:12:19 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:28.255 23:12:19 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:28.255 23:12:19 -- setup/hugepages.sh@92 -- # local surp
00:04:28.255 23:12:19 -- setup/hugepages.sh@93 -- # local resv
00:04:28.255 23:12:19 -- setup/hugepages.sh@94 -- # local anon
00:04:28.255 23:12:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:28.255 23:12:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:28.255 23:12:19 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:28.255 23:12:19 -- setup/common.sh@18 -- # local node=
00:04:28.255 23:12:19 -- setup/common.sh@19 -- # local var val
00:04:28.255 23:12:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.255 23:12:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.255 23:12:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.255 23:12:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.255 23:12:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.255 23:12:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.255 23:12:19 -- setup/common.sh@31 -- # IFS=': '
00:04:28.255 23:12:19 -- setup/common.sh@31 -- # read -r var val _
00:04:28.255 23:12:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7987352 kB' 'MemAvailable: 9526980 kB' 'Buffers: 2436 kB' 'Cached: 1753732 kB' 'SwapCached: 0 kB' 'Active: 461752 kB' 'Inactive: 1413712 kB' 'Active(anon): 129768 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 120896 kB' 'Mapped: 48712 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139960 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77772 kB' 'KernelStack: 6416 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31/@32 xtrace: the key-comparison loop repeats for every /proc/meminfo field; identical iterations collapsed ...]
00:04:28.256 23:12:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:28.256 23:12:19 -- setup/common.sh@33 -- # echo 0
00:04:28.256 23:12:19 -- setup/common.sh@33 -- # return 0
00:04:28.256 23:12:19 -- setup/hugepages.sh@97 -- # anon=0
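Every get_meminfo call in this stretch is the same setup/common.sh@16-33 pattern: pick /proc/meminfo (or a node-scoped meminfo file when a node argument is given), strip the "Node <n> " prefix with an extglob pattern, then walk the key/value pairs under IFS=': ' until the requested key matches and its value is echoed, which is where the 0 for AnonHugePages above comes from. A compact reconstruction from the trace; argument handling and the @25 node sanity check are simplified:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern in the prefix strip

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node meminfo lines read "Node 0 MemTotal: ... kB", hence the
        # strip after mapfile. With node empty this probes the nonexistent
        # .../node/node/meminfo, exactly as the trace shows.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"    # the kB unit lands in _, so this is the bare number
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total    # 1024 with the pool configured above
    get_meminfo HugePages_Free 0   # node-scoped variant, node0

The long runs of [[ key == \H\u\g\e... ]] lines in the log are simply this loop's xtrace, with bash quoting the right-hand pattern character by character.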
00:04:28.256 23:12:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:28.256 23:12:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:28.256 23:12:19 -- setup/common.sh@18 -- # local node=
00:04:28.256 23:12:19 -- setup/common.sh@19 -- # local var val
00:04:28.256 23:12:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.256 23:12:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.256 23:12:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.256 23:12:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.256 23:12:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.256 23:12:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.256 23:12:19 -- setup/common.sh@31 -- # IFS=': '
00:04:28.256 23:12:19 -- setup/common.sh@31 -- # read -r var val _
00:04:28.256 23:12:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7987352 kB' 'MemAvailable: 9526980 kB' 'Buffers: 2436 kB' 'Cached: 1753736 kB' 'SwapCached: 0 kB' 'Active: 461868 kB' 'Inactive: 1413712 kB' 'Active(anon): 129884 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121124 kB' 'Mapped: 48772 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139972 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77784 kB' 'KernelStack: 6432 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31/@32 xtrace: the key-comparison loop repeats for every /proc/meminfo field; identical iterations collapsed ...]
00:04:28.257 23:12:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:28.257 23:12:19 -- setup/common.sh@33 -- # echo 0
00:04:28.257 23:12:19 -- setup/common.sh@33 -- # return 0
00:04:28.257 23:12:19 -- setup/hugepages.sh@99 -- # surp=0
00:04:28.257 23:12:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:28.257 23:12:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:28.257 23:12:19 -- setup/common.sh@18 -- # local node=
00:04:28.257 23:12:19 -- setup/common.sh@19 -- # local var val
00:04:28.257 23:12:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.257 23:12:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.257 23:12:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.257 23:12:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.257 23:12:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.257 23:12:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.257 23:12:19 -- setup/common.sh@31 -- # IFS=': '
00:04:28.257 23:12:19 -- setup/common.sh@31 -- # read -r var val _
00:04:28.257 23:12:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7987352 kB' 'MemAvailable: 9526980 kB' 'Buffers: 2436 kB' 'Cached: 1753736 kB' 'SwapCached: 0 kB' 'Active: 461448 kB' 'Inactive: 1413712 kB' 'Active(anon): 129464 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 120672 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139956 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77768 kB' 'KernelStack: 6400 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31/@32 xtrace: the key-comparison loop repeats for every /proc/meminfo field; identical iterations collapsed ...]
00:04:28.259 23:12:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:28.259 23:12:19 -- setup/common.sh@33 -- # echo 0
00:04:28.259 23:12:19 -- setup/common.sh@33 -- # return 0
00:04:28.259 23:12:19 -- setup/hugepages.sh@100 -- # resv=0
00:04:28.259 nr_hugepages=1024
00:04:28.259 23:12:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:28.259 resv_hugepages=0
00:04:28.259 23:12:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:28.259 surplus_hugepages=0
00:04:28.259 23:12:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:28.259 anon_hugepages=0
00:04:28.259 23:12:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:28.259 23:12:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:28.259 23:12:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:28.259 23:12:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:28.259 23:12:19 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:28.259 23:12:19 -- setup/common.sh@18 -- # local node=
00:04:28.259 23:12:19 -- setup/common.sh@19 -- # local var val
00:04:28.259 23:12:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.259 23:12:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.259 23:12:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:28.259 23:12:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:28.259 23:12:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.259 23:12:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.259 23:12:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7987352 kB' 'MemAvailable: 9526980 kB' 'Buffers: 2436 kB' 'Cached: 1753736 kB' 'SwapCached: 0 kB' 'Active: 461692 kB' 'Inactive: 1413712 kB' 'Active(anon): 129708 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 120912 kB' 'Mapped: 48624 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139956 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77768 kB' 'KernelStack: 6400 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 353272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:04:28.259 23:12:19 -- setup/common.sh@31 -- # IFS=': '
00:04:28.259 23:12:19 -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31/@32 xtrace: the key-comparison loop repeats for each /proc/meminfo field; identical iterations collapsed ...]
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 
23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # continue 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.260 23:12:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.260 23:12:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
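The trace above is even_2G_alloc reading HugePages_Total back out of /proc/meminfo: get_meminfo picks a meminfo source, strips the "Node <id>" prefix that per-node sysfs files carry, then scans "key: value" pairs until the requested field matches. A minimal sketch of that pattern, assuming extglob (get_meminfo_sketch and its internals are illustrative names, not the script's own code):

shopt -s extglob   # needed for the +([0-9]) pattern below

# Return the value column of one meminfo field, system-wide or per-node.
get_meminfo_sketch() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    # Per-node files exist only when a node id was passed in; with an
    # empty $node the -e test fails and /proc/meminfo is kept, exactly as
    # the [[ -e /sys/devices/system/node/node/meminfo ]] entry shows.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <id> "; drop it so the key
    # always sits in the first column.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long run of continues above
        echo "$val"                        # e.g. 1024 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Called as get_meminfo_sketch HugePages_Total, this would print 1024 against the snapshot above; the "kB" unit lands in the discarded third read field.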
00:04:28.261 23:12:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:28.261 23:12:19 -- setup/hugepages.sh@112 -- # get_nodes
00:04:28.261 23:12:19 -- setup/hugepages.sh@27 -- # local node
00:04:28.261 23:12:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:28.261 23:12:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:28.261 23:12:19 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:28.261 23:12:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
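The @110 arithmetic asserts the value just read (1024) equals nr_hugepages plus surplus and reserved pages, and get_nodes then discovers NUMA nodes with an extglob glob over sysfs. A sketch of that discovery step under the same assumptions (the traced run stores 1024 for node0 directly; whether the script re-reads the per-node counter or reuses a cached value is not visible in this excerpt):

shopt -s extglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips everything through the last "node",
    # leaving the numeric id (.../node0 -> 0).
    nodes_sys[${node##*node}]=$(get_meminfo_sketch HugePages_Total "${node##*node}")
done
no_nodes=${#nodes_sys[@]}      # 1 on this single-node VM
(( no_nodes > 0 )) || exit 1   # at least one node must be visible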
00:04:28.261 23:12:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:28.261 23:12:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:28.261 23:12:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:28.261 23:12:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:28.261 23:12:19 -- setup/common.sh@18 -- # local node=0
00:04:28.261 23:12:19 -- setup/common.sh@19 -- # local var val
00:04:28.261 23:12:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:28.261 23:12:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:28.261 23:12:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:28.261 23:12:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:28.261 23:12:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:28.261 23:12:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:28.261 23:12:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7987352 kB' 'MemUsed: 4254620 kB' 'SwapCached: 0 kB' 'Active: 461828 kB' 'Inactive: 1413712 kB' 'Active(anon): 129844 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413712 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1756172 kB' 'Mapped: 48624 kB' 'AnonPages: 121008 kB' 'Shmem: 10472 kB' 'KernelStack: 6384 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62188 kB' 'Slab: 139956 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... repeated xtrace condensed: setup/common.sh@31-32 field scan over the node0 snapshot from MemTotal through HugePages_Free, every non-matching key skipped with continue ...]
00:04:28.262 23:12:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:28.262 23:12:19 -- setup/common.sh@33 -- # echo 0
00:04:28.262 23:12:19 -- setup/common.sh@33 -- # return 0
00:04:28.262 23:12:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
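With the node list built, the @115-@117 pass above folds the reserved count and each node's surplus pages into the expected per-node totals; node0's surplus reads back as 0 here, so the expectation stays at 1024. A condensed sketch, reusing the hypothetical helper from earlier (nodes_test and resv are stand-ins for the script's state):

declare -A nodes_test=([0]=1024)   # expected pages per node, per the trace
resv=0                             # reserved pages sampled earlier in the test
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    # Per-node surplus, read from /sys/devices/system/node/node$node/meminfo.
    (( nodes_test[node] += $(get_meminfo_sketch HugePages_Surp "$node") ))
done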
00:04:28.262 23:12:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:28.262 23:12:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:28.262 23:12:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:28.262 node0=1024 expecting 1024
00:04:28.262 23:12:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:28.262 23:12:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:28.262
00:04:28.262 real	0m1.012s
00:04:28.262 user	0m0.393s
00:04:28.262 sys	0m0.692s
00:04:28.262 23:12:19 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:28.262 23:12:19 -- common/autotest_common.sh@10 -- # set +x
00:04:28.262 ************************************
00:04:28.262 END TEST even_2G_alloc
00:04:28.262 ************************************
00:04:28.522 23:12:20 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:28.522 23:12:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:28.522 23:12:20 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:28.522 23:12:20 -- common/autotest_common.sh@10 -- # set +x
00:04:28.522 ************************************
00:04:28.522 START TEST odd_alloc
00:04:28.522 ************************************
00:04:28.522 23:12:20 -- common/autotest_common.sh@1104 -- # odd_alloc
00:04:28.522 23:12:20 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:28.522 23:12:20 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:28.522 23:12:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:28.522 23:12:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:28.522 23:12:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:28.522 23:12:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:28.522 23:12:20 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:28.522 23:12:20 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:28.522 23:12:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:28.522 23:12:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:28.522 23:12:20 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:28.522 23:12:20 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:28.522 23:12:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:28.522 23:12:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:28.522 23:12:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:28.522 23:12:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:28.522 23:12:20 -- setup/hugepages.sh@83 -- # : 0
00:04:28.522 23:12:20 -- setup/hugepages.sh@84 -- # : 0
00:04:28.522 23:12:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:28.522 23:12:20 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:28.522 23:12:20 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:28.522 23:12:20 -- setup/hugepages.sh@160 -- # setup output
00:04:28.522 23:12:20 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:28.522 23:12:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:29.091 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:29.355 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:29.355 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:29.355 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:29.355 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
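odd_alloc's sizing is worth unpacking: HUGEMEM=2049 MB works out to 2098176 kB, which is not a whole number of 2048 kB hugepages (1024 pages plus half a page), and the trace settles on nr_hugepages=1025. A back-of-envelope check assuming the helper rounds up to a whole page (the exact rounding inside get_test_nr_hugepages is not visible in this excerpt):

hugemem_mb=2049                  # the odd megabyte count the test requests
size_kb=$(( hugemem_mb * 1024 )) # 2098176 kB, the argument traced above
hugepage_kb=2048                 # Hugepagesize reported in the snapshots
# Ceiling division reproduces the traced value: (2098176 + 2047) / 2048 = 1025.
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
echo "nr_hugepages=$nr_hugepages"   # -> nr_hugepages=1025

An odd page count is the point of the test: HUGE_EVEN_ALLOC=yes then has setup.sh spread 1025 pages across nodes, and the snapshots that follow indeed report HugePages_Total: 1025 and Hugetlb: 2099200 kB.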
00:04:29.355 23:12:20 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:29.355 23:12:20 -- setup/hugepages.sh@89 -- # local node
00:04:29.355 23:12:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:29.355 23:12:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:29.355 23:12:20 -- setup/hugepages.sh@92 -- # local surp
00:04:29.355 23:12:20 -- setup/hugepages.sh@93 -- # local resv
00:04:29.355 23:12:20 -- setup/hugepages.sh@94 -- # local anon
00:04:29.355 23:12:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:29.355 23:12:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:29.355 23:12:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:29.355 23:12:20 -- setup/common.sh@18 -- # local node=
00:04:29.355 23:12:20 -- setup/common.sh@19 -- # local var val
00:04:29.355 23:12:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:29.355 23:12:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.355 23:12:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.355 23:12:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.355 23:12:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.355 23:12:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.355 23:12:20 -- setup/common.sh@31 -- # IFS=': '
00:04:29.355 23:12:20 -- setup/common.sh@31 -- # read -r var val _
00:04:29.355 23:12:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7983196 kB' 'MemAvailable: 9522828 kB' 'Buffers: 2436 kB' 'Cached: 1753736 kB' 'SwapCached: 0 kB' 'Active: 458140 kB' 'Inactive: 1413716 kB' 'Active(anon): 126156 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 117576 kB' 'Mapped: 48036 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139924 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77736 kB' 'KernelStack: 6384 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 336304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[... repeated xtrace condensed: setup/common.sh@31-32 field scan from MemTotal through HardwareCorrupted, every non-matching key skipped with continue ...]
00:04:29.357 23:12:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:29.357 23:12:20 -- setup/common.sh@33 -- # echo 0
00:04:29.357 23:12:20 -- setup/common.sh@33 -- # return 0
00:04:29.357 23:12:20 -- setup/hugepages.sh@97 -- # anon=0
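Before comparing totals, verify_nr_hugepages samples AnonHugePages, but the @96 guard above decides whether that matters: the kernel knob reads "always [madvise] never", which does not match *[never]*, so transparent hugepages are live and their contribution (0 kB in this run) has to be accounted for. A sketch of that guard, using the standard kernel path (variable names are ours, not the script's):

# Only sample AnonHugePages when THP is not disabled outright.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB in this trace
fi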
00:04:29.357 23:12:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:29.357 23:12:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.357 23:12:20 -- setup/common.sh@18 -- # local node=
00:04:29.357 23:12:20 -- setup/common.sh@19 -- # local var val
00:04:29.357 23:12:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:29.357 23:12:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.357 23:12:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.357 23:12:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.357 23:12:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.357 23:12:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.357 23:12:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7994724 kB' 'MemAvailable: 9534356 kB' 'Buffers: 2436 kB' 'Cached: 1753736 kB' 'SwapCached: 0 kB' 'Active: 457664 kB' 'Inactive: 1413716 kB' 'Active(anon): 125680 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 116812 kB' 'Mapped: 47884 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139772 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77584 kB' 'KernelStack: 6304 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 336304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[... repeated xtrace condensed: setup/common.sh@31-32 field scan from MemTotal through HugePages_Rsvd, every non-matching key skipped with continue ...]
00:04:29.359 23:12:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.359 23:12:20 -- setup/common.sh@33 -- # echo 0
00:04:29.359 23:12:20 -- setup/common.sh@33 -- # return 0
00:04:29.359 23:12:20 -- setup/hugepages.sh@99 -- # surp=0
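The surplus read-back completes the second of the three terms verify_nr_hugepages needs; HugePages_Rsvd is sampled next, and the earlier even_2G_alloc run already showed the identity being enforced at @110: (( HugePages_Total == nr_hugepages + surp + resv )). Pulled together with the sketch helper (values as traced; the Rsvd read is still in flight when this excerpt ends):

nr_hugepages=1025                            # requested by odd_alloc
surp=$(get_meminfo_sketch HugePages_Surp)    # 0, as just traced
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 per the meminfo snapshots
total=$(get_meminfo_sketch HugePages_Total)  # 1025 after setup.sh ran
(( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting matches'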
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 116812 kB' 'Mapped: 47884 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139772 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77584 kB' 'KernelStack: 6304 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 336304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 
23:12:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.360 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.360 23:12:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:20 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:21 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.361 23:12:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.361 23:12:21 -- setup/common.sh@33 -- # echo 0 00:04:29.361 23:12:21 -- setup/common.sh@33 -- # return 0 00:04:29.361 23:12:21 -- setup/hugepages.sh@100 -- # resv=0 00:04:29.361 nr_hugepages=1025 00:04:29.361 23:12:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:29.361 resv_hugepages=0 00:04:29.361 23:12:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.361 surplus_hugepages=0 00:04:29.361 23:12:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.361 anon_hugepages=0 00:04:29.361 23:12:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.361 23:12:21 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:29.361 23:12:21 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:29.361 23:12:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.361 23:12:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.361 23:12:21 -- setup/common.sh@18 -- # local node= 00:04:29.361 23:12:21 -- setup/common.sh@19 -- # local var val 00:04:29.361 23:12:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:29.361 23:12:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.361 23:12:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.361 23:12:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.361 23:12:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.361 23:12:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.361 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7994472 kB' 'MemAvailable: 9534104 kB' 'Buffers: 2436 kB' 'Cached: 1753736 kB' 'SwapCached: 0 kB' 'Active: 457552 kB' 'Inactive: 1413716 kB' 'Active(anon): 125568 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 116932 kB' 'Mapped: 47884 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139772 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77584 kB' 'KernelStack: 6272 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 336304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.362 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.362 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.363 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.363 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.364 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.364 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.364 23:12:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.364 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.364 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.364 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.364 23:12:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.364 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.364 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.364 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.364 23:12:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.364 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.364 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.364 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.364 23:12:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.364 23:12:21 -- setup/common.sh@32 -- # continue 00:04:29.364 23:12:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.364 23:12:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.364 23:12:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.364 23:12:21 -- setup/common.sh@33 -- # echo 1025 00:04:29.364 23:12:21 -- setup/common.sh@33 -- # return 0 00:04:29.364 
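The scans above are the whole of get_meminfo's logic as traced from setup/common.sh: split each line of /proc/meminfo (or a node's sysfs meminfo) on ': ', skip every field until the requested one, and echo its value; setup/hugepages.sh then folds the results into the totals check at line 107. A minimal standalone sketch of that flow in plain bash follows -- illustrative names and structure, not the SPDK sources verbatim:

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local var val _
    # A node argument switches to that node's sysfs meminfo, whose lines
    # carry a "Node N " prefix that must be stripped before matching.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        # Non-matching fields are skipped -- the long runs of "continue"
        # in the xtrace above are exactly this branch.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

# The consistency check hugepages.sh traces above, for the odd 1025-page pool:
nr_hugepages=1025
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp + resv )) && echo "hugepage pool consistent"

Each lookup re-reads the whole file until its field matches, which is why every single query in this log produces a full run of read/continue trace lines.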
00:04:29.364 23:12:21 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:29.364 23:12:21 -- setup/hugepages.sh@112 -- # get_nodes
00:04:29.364 23:12:21 -- setup/hugepages.sh@27 -- # local node
00:04:29.364 23:12:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:29.364 23:12:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:29.364 23:12:21 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:29.364 23:12:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:29.364 23:12:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:29.364 23:12:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:29.364 23:12:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:29.364 23:12:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.364 23:12:21 -- setup/common.sh@18 -- # local node=0
00:04:29.364 23:12:21 -- setup/common.sh@19 -- # local var val
00:04:29.364 23:12:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:29.364 23:12:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.364 23:12:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:29.364 23:12:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:29.364 23:12:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.364 23:12:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.364 23:12:21 -- setup/common.sh@31 -- # IFS=': '
00:04:29.364 23:12:21 -- setup/common.sh@31 -- # read -r var val _
00:04:29.364 23:12:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7994472 kB' 'MemUsed: 4247500 kB' 'SwapCached: 0 kB' 'Active: 457552 kB' 'Inactive: 1413716 kB' 'Active(anon): 125568 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1756172 kB' 'Mapped: 47884 kB' 'AnonPages: 116672 kB' 'Shmem: 10472 kB' 'KernelStack: 6272 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62188 kB' 'Slab: 139772 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:29.365 [ per-field scan of node0 meminfo trimmed: no field matches until HugePages_Surp ]
00:04:29.366 23:12:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.366 23:12:21 -- setup/common.sh@33 -- # echo 0
00:04:29.366 23:12:21 -- setup/common.sh@33 -- # return 0
00:04:29.366 23:12:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:29.366 23:12:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:29.366 23:12:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:29.366 23:12:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:29.366 23:12:21 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:29.366 node0=1025 expecting 1025
00:04:29.366 23:12:21 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:29.366 
00:04:29.366 real	0m1.016s
00:04:29.366 user	0m0.412s
00:04:29.366 sys	0m0.673s
00:04:29.366 23:12:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:29.366 23:12:21 -- common/autotest_common.sh@10 -- # set +x
00:04:29.366 ************************************
00:04:29.366 END TEST odd_alloc
00:04:29.366 ************************************
00:04:29.626 23:12:21 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:29.626 23:12:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:29.626 23:12:21 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:29.626 23:12:21 -- common/autotest_common.sh@10 -- # set +x
00:04:29.626 ************************************
00:04:29.626 START TEST custom_alloc
00:04:29.626 ************************************
00:04:29.626 23:12:21 -- common/autotest_common.sh@1104 -- # custom_alloc
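The custom_alloc test that starts here sizes a 1048576 kB (1 GiB) request in default 2048 kB pages and records the per-node split in a HUGENODE spec, as the trace below walks through. A small sketch of the same arithmetic, under the single-NUMA-node assumption of this VM and with illustrative variable names:

size_kb=1048576                                 # requested pool: 1 GiB
hugepagesize_kb=2048                            # default 2 MiB pages, per the snapshots above
nr_hugepages=$(( size_kb / hugepagesize_kb ))   # -> 512
nodes_hp=([0]=$nr_hugepages)                    # one node on this VM, so node 0 gets all 512
HUGENODE=()
for node in "${!nodes_hp[@]}"; do
    # One "nodes_hp[N]=count" entry per node that should receive pages.
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done
IFS=,                                           # the script joins entries with commas
echo "HUGENODE=${HUGENODE[*]}"                  # -> HUGENODE=nodes_hp[0]=512

With a single node the joined spec is just nodes_hp[0]=512, which is exactly what hugepages.sh@187 echoes in the trace that follows.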
00:04:29.626 23:12:21 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:29.626 23:12:21 -- setup/hugepages.sh@169 -- # local node
00:04:29.626 23:12:21 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:29.626 23:12:21 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:29.626 23:12:21 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:29.626 23:12:21 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:29.626 23:12:21 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:29.626 23:12:21 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:29.626 23:12:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:29.626 23:12:21 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:29.626 23:12:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:29.626 23:12:21 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:29.626 23:12:21 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:29.626 23:12:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:29.626 23:12:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:29.626 23:12:21 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:29.626 23:12:21 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:29.626 23:12:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:29.626 23:12:21 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:29.626 23:12:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:29.626 23:12:21 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:29.626 23:12:21 -- setup/hugepages.sh@83 -- # : 0
00:04:29.626 23:12:21 -- setup/hugepages.sh@84 -- # : 0
00:04:29.626 23:12:21 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:29.626 23:12:21 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:29.626 23:12:21 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:29.626 23:12:21 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:29.626 23:12:21 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:29.626 23:12:21 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:29.626 23:12:21 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:29.626 23:12:21 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:29.626 23:12:21 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:29.626 23:12:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:29.626 23:12:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:29.626 23:12:21 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:29.626 23:12:21 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:29.626 23:12:21 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:29.626 23:12:21 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:29.626 23:12:21 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:29.626 23:12:21 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:29.626 23:12:21 -- setup/hugepages.sh@78 -- # return 0
00:04:29.626 23:12:21 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:29.626 23:12:21 -- setup/hugepages.sh@187 -- # setup output
00:04:29.626 23:12:21 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:29.626 23:12:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:30.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:30.459 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:30.459 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
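After setup.sh rebinds the devices above, verify_nr_hugepages (traced next) re-reads the counters for the new 512-page pool. A condensed sketch of the checks it performs, reusing the hypothetical get_meminfo helper sketched earlier and assuming the nodes_hp split from the previous block:

nr_hugepages=512
# Transparent hugepages must not be enabled system-wide, and no anon,
# surplus, or reserved pages may hide inside the 512-page total.
anon=$(get_meminfo AnonHugePages)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
total=$(get_meminfo HugePages_Total)
(( anon == 0 && total == nr_hugepages + surp + resv )) || exit 1
# Then the same counter per NUMA node, compared against the requested split:
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    echo "node$node=$(get_meminfo HugePages_Total "$node") expecting ${nodes_hp[node]:-0}"
done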
00:04:30.459 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:30.459 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:30.459 23:12:22 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:30.459 23:12:22 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:30.459 23:12:22 -- setup/hugepages.sh@89 -- # local node
00:04:30.459 23:12:22 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:30.459 23:12:22 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:30.459 23:12:22 -- setup/hugepages.sh@92 -- # local surp
00:04:30.459 23:12:22 -- setup/hugepages.sh@93 -- # local resv
00:04:30.459 23:12:22 -- setup/hugepages.sh@94 -- # local anon
00:04:30.459 23:12:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:30.459 23:12:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:30.459 23:12:22 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:30.459 23:12:22 -- setup/common.sh@18 -- # local node=
00:04:30.459 23:12:22 -- setup/common.sh@19 -- # local var val
00:04:30.459 23:12:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.459 23:12:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.459 23:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.459 23:12:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.459 23:12:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.459 23:12:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.459 23:12:22 -- setup/common.sh@31 -- # IFS=': '
00:04:30.459 23:12:22 -- setup/common.sh@31 -- # read -r var val _
00:04:30.459 23:12:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9048008 kB' 'MemAvailable: 10587640 kB' 'Buffers: 2436 kB' 'Cached: 1753736 kB' 'SwapCached: 0 kB' 'Active: 458352 kB' 'Inactive: 1413716 kB' 'Active(anon): 126368 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117472 kB' 'Mapped: 47976 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139648 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77460 kB' 'KernelStack: 6304 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:04:30.460 [ per-field scan trimmed: no field matches until AnonHugePages ]
00:04:30.460 23:12:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:30.460 23:12:22 -- setup/common.sh@33 -- # echo 0
00:04:30.460 23:12:22 -- setup/common.sh@33 -- # return 0
00:04:30.460 23:12:22 -- setup/hugepages.sh@97 -- # anon=0
00:04:30.460 23:12:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:30.460 23:12:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.460 23:12:22 -- setup/common.sh@18 -- # local node=
00:04:30.460 23:12:22 -- setup/common.sh@19 -- # local var val
00:04:30.460 23:12:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.460 23:12:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.460 23:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.460 23:12:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.460 23:12:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.460 23:12:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.460 23:12:22 -- setup/common.sh@31 -- # IFS=': '
00:04:30.460 23:12:22 -- setup/common.sh@31 -- # read -r var val _
00:04:30.461 23:12:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9048008 kB' 'MemAvailable: 10587640 kB' 'Buffers: 2436 kB' 'Cached: 1753736 kB' 'SwapCached: 0 kB' 'Active: 457680 kB' 'Inactive: 1413716 kB' 'Active(anon): 125696 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116844 kB' 'Mapped: 47884 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139652 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77464 kB' 'KernelStack: 6304 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 336304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue
00:04:30.461 [ per-field scan of the 512-page pool continues in the untrimmed log ]
setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.461 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.461 23:12:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.462 23:12:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.462 23:12:22 -- setup/common.sh@33 -- # echo 0 00:04:30.462 23:12:22 -- setup/common.sh@33 -- # return 0 00:04:30.462 23:12:22 -- setup/hugepages.sh@99 -- # surp=0 00:04:30.462 23:12:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.462 23:12:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.462 23:12:22 -- setup/common.sh@18 -- # local node= 00:04:30.462 23:12:22 -- setup/common.sh@19 -- # local var val 00:04:30.462 23:12:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.462 23:12:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.462 23:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.462 23:12:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.462 23:12:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.462 23:12:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.462 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.462 23:12:22 -- 
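[editor's note] For readability, here is a minimal bash sketch of the scanner the xtrace above is stepping through, reconstructed from the traced setup/common.sh lines (@16-33). Details not visible in the trace -- the exact loop form and the override test at @25 -- are inferred, so treat this as an approximation of SPDK's helper, not its verbatim source:

    #!/usr/bin/env bash
    # Print the value of one meminfo key; with a node id, read that
    # node's own meminfo view instead of the system-wide file.
    shopt -s extglob

    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Prefer the per-node view when a node id was given and it exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip it so
        # both file formats parse identically (extglob pattern, as traced).
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "key: value" pairs, skipping (continue) until the key matches
        # -- exactly the [[ ... ]] / continue churn visible in the log.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Example: get_meminfo HugePages_Surp     -> system-wide surplus pages
    #          get_meminfo HugePages_Surp 0   -> surplus pages on NUMA node 0

This explains the shape of the trace: one printf feeding the loop, then one IFS/read/test/continue cycle per meminfo key until the requested key is reached.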
00:04:30.462 23:12:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace condensed: /proc/meminfo re-read; snapshot identical to the one above except Active: 457724 kB, Active(anon): 125740 kB, AnonPages: 116880 kB, KernelStack: 6320 kB, PageTables: 3948 kB, VmallocUsed: 54996 kB; the scan skips non-matching keys until HugePages_Rsvd matches]
00:04:30.463 23:12:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.463 23:12:22 -- setup/common.sh@33 -- # echo 0 00:04:30.463 23:12:22 -- setup/common.sh@33 -- # return 0 00:04:30.463 23:12:22 -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=512
00:04:30.463 23:12:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
resv_hugepages=0
00:04:30.463 23:12:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:04:30.463 23:12:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:04:30.463 23:12:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.463 23:12:22 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:30.463 23:12:22 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
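[editor's note] The hugepages.sh lines just traced (@97-110) are a bookkeeping pass: read the anonymous/surplus/reserved counters, report them, then confirm the pool accounts for exactly what was requested. A sketch of that logic follows; the function name, the guard ordering, and the hardcoded 512 are assumptions drawn from this particular run, and get_meminfo is the helper sketched earlier:

    # Verify the hugepage pool after setting nr_hugepages (sketch).
    verify_nr_hugepages() {
        local nr_hugepages=512                  # value configured in this run
        local anon surp resv total

        anon=$(get_meminfo AnonHugePages)       # transparent hugepages in use (kB)
        surp=$(get_meminfo HugePages_Surp)      # pages allocated beyond the limit
        resv=$(get_meminfo HugePages_Rsvd)      # reserved but not yet faulted in

        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"

        total=$(get_meminfo HugePages_Total)
        # The pool must account exactly for the request plus any surplus or
        # reserved pages; with 512/0/0 above both traced checks pass.
        (( total == nr_hugepages + surp + resv ))
    }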
00:04:30.463 23:12:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace condensed: /proc/meminfo re-read; snapshot identical to the first one above except Active: 457688 kB, Active(anon): 125704 kB, VmallocUsed: 54996 kB; the scan skips non-matching keys until HugePages_Total matches]
00:04:30.465 23:12:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.465 23:12:22 -- setup/common.sh@33 -- # echo 512 00:04:30.465 23:12:22 -- setup/common.sh@33 -- # return 0 00:04:30.465 23:12:22 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:30.465 23:12:22 -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.465 23:12:22 -- setup/hugepages.sh@27 -- # local node 00:04:30.465 23:12:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.465 23:12:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:30.465 23:12:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:30.465 23:12:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:30.465 23:12:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.465 23:12:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.465 23:12:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.465 23:12:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.465 23:12:22 -- setup/common.sh@18 -- # local node=0 00:04:30.465 23:12:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.465 23:12:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.465 23:12:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.465 23:12:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.465 23:12:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9048008 kB' 'MemUsed: 3193964 kB' 'SwapCached: 0 kB' 'Active: 457628 kB' 'Inactive: 1413716 kB' 'Active(anon): 125644 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413716 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1756172 kB' 'Mapped: 47884 kB' 'AnonPages: 117000 kB' 'Shmem: 10472 kB' 'KernelStack: 6288 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62188 kB' 'Slab: 139648 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
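[editor's note] The trace has now moved from the system-wide counters to a per-NUMA-node pass: get_nodes (hugepages.sh@27-33) discovers the node directories, then the loop at @115-117 re-checks the counters against each node's own meminfo. A sketch under those traced names follows; the trace shows two arrays, nodes_sys (filled from sysfs) and nodes_test (the expected per-node counts), and how nodes_test is seeded is not visible here, so that part is assumed:

    shopt -s extglob nullglob

    declare -a nodes_sys
    declare -a nodes_test=(512)   # expected: 512 pages on node 0 (assumed seeding)
    resv=0                        # reserved-page count from the system-wide pass

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # Record what the kernel reports per node (512 on node 0 here).
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))        # this run found exactly one node
    }

    get_nodes
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))   # fold reserved pages into the expectation
        # Per-node surplus must stay at zero; read via the node's meminfo view,
        # which is why the next trace reads /sys/devices/system/node/node0/meminfo.
        (( $(get_meminfo HugePages_Surp "$node") == 0 ))
    done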
[xtrace condensed: the node-0 scan steps over MemTotal ... SUnreclaim via continue]
00:04:30.466 23:12:22 -- setup/common.sh@32 -- # [[
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # continue 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.466 23:12:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.466 23:12:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.466 23:12:22 -- setup/common.sh@33 -- # echo 0 00:04:30.466 23:12:22 -- setup/common.sh@33 -- # return 0 00:04:30.466 23:12:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.466 23:12:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.466 23:12:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.466 23:12:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.466 node0=512 expecting 512 00:04:30.466 23:12:22 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:30.466 23:12:22 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:30.466 00:04:30.466 real 0m1.027s 00:04:30.466 user 0m0.422s 00:04:30.466 sys 0m0.679s 00:04:30.466 23:12:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.466 23:12:22 -- common/autotest_common.sh@10 -- # set +x 00:04:30.466 ************************************ 00:04:30.466 END TEST custom_alloc 00:04:30.466 ************************************ 00:04:30.726 23:12:22 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:30.726 23:12:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.726 23:12:22 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.726 23:12:22 -- common/autotest_common.sh@10 -- # set +x 00:04:30.726 ************************************ 00:04:30.726 START TEST no_shrink_alloc 00:04:30.726 ************************************ 00:04:30.726 23:12:22 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:30.726 23:12:22 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:30.726 23:12:22 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:30.726 23:12:22 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:30.726 23:12:22 -- setup/hugepages.sh@51 -- # shift 00:04:30.726 23:12:22 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:30.726 23:12:22 -- setup/hugepages.sh@52 -- # local node_ids 00:04:30.726 23:12:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:30.726 23:12:22 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:30.726 23:12:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:30.726 23:12:22 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:30.726 23:12:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:30.726 23:12:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:30.726 23:12:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:30.726 23:12:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:30.726 23:12:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:30.726 23:12:22 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:30.726 23:12:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:30.726 23:12:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:30.726 23:12:22 -- setup/hugepages.sh@73 -- # return 0 00:04:30.726 23:12:22 -- setup/hugepages.sh@198 -- # setup output 00:04:30.726 23:12:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.727 23:12:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.560 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.560 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.560 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.560 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:31.560 23:12:23 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:31.560 23:12:23 -- setup/hugepages.sh@89 -- # local node 00:04:31.560 23:12:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.560 23:12:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.560 23:12:23 -- setup/hugepages.sh@92 -- # local surp 00:04:31.560 23:12:23 -- setup/hugepages.sh@93 -- # local resv 00:04:31.560 23:12:23 -- setup/hugepages.sh@94 -- # local anon 00:04:31.560 23:12:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.560 23:12:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.560 23:12:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.560 23:12:23 -- setup/common.sh@18 -- # local node= 00:04:31.560 23:12:23 -- setup/common.sh@19 -- # local var val 00:04:31.560 23:12:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:31.560 23:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.560 23:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.560 23:12:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.560 23:12:23 -- setup/common.sh@28 -- # 
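The get_test_nr_hugepages trace above reduces to a small piece of arithmetic: the requested pool size (2097152 kB, i.e. 2 GiB) divided by the default hugepage size (Hugepagesize: 2048 kB on this VM, per the meminfo snapshots below) gives nr_hugepages=1024, which is then assigned to each requested NUMA node. A minimal sketch of that logic, with illustrative names rather than the script's exact source:

#!/usr/bin/env bash
# Sketch of the sizing step traced above; not the script's literal code.
get_test_nr_hugepages() {   # usage: get_test_nr_hugepages <size_kb> [node...]
    local size_kb=$1; shift
    local default_kb node
    default_kb=$(awk '/Hugepagesize:/ { print $2 }' /proc/meminfo)  # 2048 here
    nr_hugepages=$(( size_kb / default_kb ))     # 2097152 / 2048 = 1024
    for node in "$@"; do
        nodes_test[node]=$nr_hugepages           # one full share per listed node
    done
}

nodes_test=()
get_test_nr_hugepages 2097152 0
echo "nr_hugepages=$nr_hugepages on node(s): ${!nodes_test[*]}"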
00:04:31.560 23:12:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:31.560 23:12:23 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:31.560 23:12:23 -- setup/common.sh@18 -- # local node=
00:04:31.560 23:12:23 -- setup/common.sh@19 -- # local var val
00:04:31.560 23:12:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.560 23:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.560 23:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.560 23:12:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.560 23:12:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.560 23:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.560 23:12:23 -- setup/common.sh@31 -- # IFS=': '
00:04:31.560 23:12:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7997220 kB' 'MemAvailable: 9536856 kB' 'Buffers: 2436 kB' 'Cached: 1753740 kB' 'SwapCached: 0 kB' 'Active: 457732 kB' 'Inactive: 1413720 kB' 'Active(anon): 125748 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117108 kB' 'Mapped: 48016 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139640 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77452 kB' 'KernelStack: 6300 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:04:31.560 23:12:23 -- setup/common.sh@31 -- # read -r var val _
00:04:31.560 23:12:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:31.560 23:12:23 -- setup/common.sh@32 -- # continue
[... identical @31 read / @32 compare / @32 continue records for every following meminfo field until AnonHugePages ...]
00:04:31.561 23:12:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:31.561 23:12:23 -- setup/common.sh@33 -- # echo 0
00:04:31.561 23:12:23 -- setup/common.sh@33 -- # return 0
00:04:31.561 23:12:23 -- setup/hugepages.sh@97 -- # anon=0
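The scan that fills most of this trace is a plain field lookup: each line of /proc/meminfo is split on ': ' and compared against the requested key until it matches, and the value is echoed. A minimal sketch of the pattern, simplified from what setup/common.sh actually does:

#!/usr/bin/env bash
# Walk /proc/meminfo one "Key: value [kB]" line at a time and print the
# value of the requested key. Each non-matching key appears in xtrace as
# one [[ ... ]] / continue pair, exactly as in the log above.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do     # third field swallows the "kB" unit
        [[ $var == "$get" ]] || continue
        echo "$val"                          # AnonHugePages is 0 kB here, hence echo 0
        return 0
    done < /proc/meminfo
    return 1                                 # key not found
}

get_meminfo AnonHugePages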
00:04:31.561 23:12:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:31.561 23:12:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:31.561 23:12:23 -- setup/common.sh@18 -- # local node=
00:04:31.561 23:12:23 -- setup/common.sh@19 -- # local var val
00:04:31.561 23:12:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.561 23:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.561 23:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.561 23:12:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.562 23:12:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.562 23:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.562 23:12:23 -- setup/common.sh@31 -- # IFS=': '
00:04:31.562 23:12:23 -- setup/common.sh@31 -- # read -r var val _
00:04:31.562 23:12:23 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot, identical to the one above except 'Active: 457928 kB' 'Active(anon): 125944 kB' 'AnonPages: 117084 kB' 'Mapped: 47888 kB' 'Slab: 139660 kB' 'SUnreclaim: 77472 kB' 'KernelStack: 6288 kB' 'PageTables: 3840 kB' 'VmallocUsed: 54996 kB']
00:04:31.562 23:12:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:31.562 23:12:23 -- setup/common.sh@32 -- # continue
[... identical @31 read / @32 compare / @32 continue records for every following meminfo field until HugePages_Surp ...]
00:04:31.563 23:12:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:31.563 23:12:23 -- setup/common.sh@33 -- # echo 0
00:04:31.563 23:12:23 -- setup/common.sh@33 -- # return 0
00:04:31.563 23:12:23 -- setup/hugepages.sh@99 -- # surp=0
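The snapshot step seen at @28/@29 in each call captures the whole meminfo file into an array with mapfile and strips the "Node <n> " prefix that the per-node files under /sys/devices/system/node carry, so both sources parse identically. A self-contained sketch of that trick; the extglob assumption and the node-0 path are taken from the trace, the rest is illustrative:

#!/usr/bin/env bash
shopt -s extglob                             # needed for the +([0-9]) pattern below
node=0
mem_f=/sys/devices/system/node/node$node/meminfo
[[ -e $mem_f ]] || mem_f=/proc/meminfo       # fall back to the global file

mapfile -t mem < "$mem_f"                    # one array element per line
mem=("${mem[@]#Node +([0-9]) }")             # drop any leading "Node 0 " prefix
printf '%s\n' "${mem[@]:0:3}"                # peek at the first few entries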
00:04:31.563 23:12:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:31.563 23:12:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:31.563 23:12:23 -- setup/common.sh@18 -- # local node=
00:04:31.563 23:12:23 -- setup/common.sh@19 -- # local var val
00:04:31.563 23:12:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.563 23:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.563 23:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.563 23:12:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.563 23:12:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.563 23:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.563 23:12:23 -- setup/common.sh@31 -- # IFS=': '
00:04:31.563 23:12:23 -- setup/common.sh@31 -- # read -r var val _
00:04:31.563 23:12:23 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot, identical to the one above except 'Active: 457724 kB' 'Active(anon): 125740 kB' 'AnonPages: 117116 kB' 'KernelStack: 6304 kB' 'PageTables: 3892 kB']
00:04:31.563 23:12:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:31.563 23:12:23 -- setup/common.sh@32 -- # continue
[... identical @31 read / @32 compare / @32 continue records for every following meminfo field until HugePages_Rsvd ...]
00:04:31.564 23:12:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:31.564 23:12:23 -- setup/common.sh@33 -- # echo 0
00:04:31.564 23:12:23 -- setup/common.sh@33 -- # return 0
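The backslash-riddled right-hand sides in these compares (e.g. \H\u\g\e\P\a\g\e\s\_\R\s\v\d) are not part of the script; they are how bash xtrace renders a quoted pattern inside [[ ]], so each test is an ordinary literal match. A two-line demonstration, assuming nothing beyond stock bash:

#!/usr/bin/env bash
set -x
get=HugePages_Rsvd var=HugePages_Rsvd
# xtrace prints the quoted "$get" with every character escaped, exactly as
# in the CI log: [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[[ $var == "$get" ]] && echo matched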
00:04:31.564 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
23:12:23 -- setup/hugepages.sh@100 -- # resv=0
23:12:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
23:12:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
23:12:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
23:12:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
23:12:23 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
23:12:23 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
23:12:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
23:12:23 -- setup/common.sh@17 -- # local get=HugePages_Total
23:12:23 -- setup/common.sh@18 -- # local node=
23:12:23 -- setup/common.sh@19 -- # local var val
23:12:23 -- setup/common.sh@20 -- # local mem_f mem
23:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
23:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
23:12:23 -- setup/common.sh@25 -- # [[ -n '' ]]
23:12:23 -- setup/common.sh@28 -- # mapfile -t mem
23:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
23:12:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7997220 kB' 'MemAvailable: 9536856 kB' 'Buffers: 2436 kB' 'Cached: 1753740 kB' 'SwapCached: 0 kB' 'Active: 457644 kB' 'Inactive: 1413720 kB' 'Active(anon): 125660 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117024 kB' 'Mapped: 47888 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139656 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77468 kB' 'KernelStack: 6288 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:04:31.565 [... setup/common.sh@31-32 xtrace: IFS=': ' / read -r var val _ / compare / continue, repeated for each field from MemTotal through Unaccepted while scanning for HugePages_Total ...]
00:04:31.566 23:12:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:31.566 23:12:23 -- setup/common.sh@33 -- # echo 1024
00:04:31.566 23:12:23 -- setup/common.sh@33 -- # return 0
00:04:31.566 23:12:23 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:31.566 23:12:23 -- setup/hugepages.sh@112 -- # get_nodes
00:04:31.566 23:12:23 -- setup/hugepages.sh@27 -- # local node
00:04:31.566 23:12:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:31.566 23:12:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:31.566 23:12:23 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:31.566 23:12:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:31.566 23:12:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:31.566 23:12:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
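Reading the hugepages.sh entries above, the check is plain arithmetic: the HugePages_Total that get_meminfo just returned must equal the requested page count plus any surplus and reserved pages, and get_nodes then notes how many pages each NUMA node holds (one node here, hence no_nodes=1). A condensed sketch of that bookkeeping, using the names from the trace (an approximation, not the verbatim script):

    declare -A nodes_sys nodes_test

    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

    # get_nodes: record the per-node page counts (relies on the extglob set earlier)
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$nr_hugepages   # the trace shows nodes_sys[0]=1024
    done
    (( ${#nodes_sys[@]} > 0 ))   # no_nodes=1 in this run

The per-node HugePages_Surp query that resumes just below feeds the same tally.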
00:04:31.566 23:12:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:31.566 23:12:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:31.566 23:12:23 -- setup/common.sh@18 -- # local node=0
00:04:31.566 23:12:23 -- setup/common.sh@19 -- # local var val
00:04:31.566 23:12:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.566 23:12:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.566 23:12:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:31.566 23:12:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:31.566 23:12:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.566 23:12:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.566 23:12:23 -- setup/common.sh@31 -- # IFS=': '
00:04:31.566 23:12:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7997220 kB' 'MemUsed: 4244752 kB' 'SwapCached: 0 kB' 'Active: 457648 kB' 'Inactive: 1413720 kB' 'Active(anon): 125664 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1756176 kB' 'Mapped: 47888 kB' 'AnonPages: 117020 kB' 'Shmem: 10472 kB' 'KernelStack: 6288 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62188 kB' 'Slab: 139656 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:31.566 [... setup/common.sh@31-32 xtrace: read/compare/continue repeated for each node0 field from MemTotal through HugePages_Free while scanning for HugePages_Surp ...]
00:04:31.827 23:12:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:31.827 23:12:23 -- setup/common.sh@33 -- # echo 0
00:04:31.827 23:12:23 -- setup/common.sh@33 -- # return 0
00:04:31.827 node0=1024 expecting 1024
23:12:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:31.827 23:12:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:31.827 23:12:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:31.827 23:12:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:31.827 23:12:23 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:31.827 23:12:23 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:31.827 23:12:23 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:31.827 23:12:23 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:31.827 23:12:23 -- setup/hugepages.sh@202 -- # setup output
00:04:31.827 23:12:23 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:31.827 23:12:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:32.396 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:32.396 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:32.396 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:32.396 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:32.396 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:32.659 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:32.659 23:12:24 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:32.659 23:12:24 -- setup/hugepages.sh@89 -- # local node
00:04:32.659 23:12:24 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:32.659 23:12:24 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:32.659 23:12:24 -- setup/hugepages.sh@92 -- # local surp
00:04:32.659 23:12:24 -- setup/hugepages.sh@93 -- # local resv
00:04:32.659 23:12:24 -- setup/hugepages.sh@94 -- # local anon
00:04:32.659 23:12:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
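The `always [madvise] never` string tested at hugepages.sh@96 above is the content of the kernel's transparent-hugepage switch; AnonHugePages is only worth counting when THP is not pinned to never, which is why the AnonHugePages query runs next. A hedged sketch of that gate (assuming the standard sysfs path):

    # The bracketed word is the active THP mode; here it is [madvise], not [never]
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # the call traced below; 0 on this VM
    else
        anon=0
    fi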
00:04:32.659 23:12:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:32.659 23:12:24 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:32.659 23:12:24 -- setup/common.sh@18 -- # local node=
00:04:32.659 23:12:24 -- setup/common.sh@19 -- # local var val
00:04:32.659 23:12:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.659 23:12:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.659 23:12:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.659 23:12:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.659 23:12:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.659 23:12:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.659 23:12:24 -- setup/common.sh@31 -- # IFS=': '
00:04:32.659 23:12:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7991008 kB' 'MemAvailable: 9530644 kB' 'Buffers: 2436 kB' 'Cached: 1753740 kB' 'SwapCached: 0 kB' 'Active: 458908 kB' 'Inactive: 1413720 kB' 'Active(anon): 126924 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118296 kB' 'Mapped: 48072 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139660 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77472 kB' 'KernelStack: 6488 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:04:32.659 [... setup/common.sh@31-32 xtrace: read/compare/continue repeated for each field from MemTotal through HardwareCorrupted while scanning for AnonHugePages ...]
00:04:32.660 23:12:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:32.660 23:12:24 -- setup/common.sh@33 -- # echo 0
00:04:32.660 23:12:24 -- setup/common.sh@33 -- # return 0
00:04:32.660 23:12:24 -- setup/hugepages.sh@97 -- # anon=0
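With anon=0 recorded, the rest of verify_nr_hugepages repeats the same query pattern for the surplus and reserved counters before re-checking the total. Condensed, the sequence this stretch of the trace drives looks roughly like this (a sketch keyed to the hugepages.sh line numbers in the trace, not verbatim code):

    anon=$(get_meminfo AnonHugePages)     # @97  -> 0
    surp=$(get_meminfo HugePages_Surp)    # @99  -> 0, the call that resumes below
    resv=$(get_meminfo HugePages_Rsvd)    # @100 -> 0
    total=$(get_meminfo HugePages_Total)  # 1024
    (( total == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0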
00:04:32.660 23:12:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:32.660 23:12:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:32.660 23:12:24 -- setup/common.sh@18 -- # local node=
00:04:32.660 23:12:24 -- setup/common.sh@19 -- # local var val
00:04:32.660 23:12:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.660 23:12:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.660 23:12:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.660 23:12:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.660 23:12:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.660 23:12:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.660 23:12:24 -- setup/common.sh@31 -- # IFS=': '
00:04:32.660 23:12:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7991920 kB' 'MemAvailable: 9531556 kB' 'Buffers: 2436 kB' 'Cached: 1753740 kB' 'SwapCached: 0 kB' 'Active: 458000 kB' 'Inactive: 1413720 kB' 'Active(anon): 126016 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117352 kB' 'Mapped: 47868 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139676 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77488 kB' 'KernelStack: 6364 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:04:32.660 [... setup/common.sh@31-32 xtrace: read/compare/continue repeated for each field from MemTotal through HugePages_Rsvd while scanning for HugePages_Surp ...]
00:04:32.661 23:12:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.661 23:12:24 -- setup/common.sh@33 -- # echo 0
00:04:32.661 23:12:24 -- setup/common.sh@33 -- # return 0
00:04:32.661 23:12:24 -- setup/hugepages.sh@99 -- # surp=0
00:04:32.661 23:12:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:32.661 23:12:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:32.661 23:12:24 -- setup/common.sh@18 -- # local node=
00:04:32.661 23:12:24 -- setup/common.sh@19 -- # local var val
00:04:32.661 23:12:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.661 23:12:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.661 23:12:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.661 23:12:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.661 23:12:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.662 23:12:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': '
00:04:32.662 23:12:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7991920 kB' 'MemAvailable: 9531556 kB' 'Buffers: 2436 kB' 'Cached: 1753740 kB' 'SwapCached: 0 kB' 'Active: 458004 kB' 'Inactive: 1413720 kB' 'Active(anon): 126020 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117352 kB' 'Mapped: 47868 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139676 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77488 kB' 'KernelStack: 6364 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _
23:12:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 
23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.662 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.662 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 
00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.663 23:12:24 -- setup/common.sh@33 -- # echo 0 
00:04:32.663 23:12:24 -- setup/common.sh@33 -- # return 0 00:04:32.663 nr_hugepages=1024 00:04:32.663 resv_hugepages=0 00:04:32.663 surplus_hugepages=0 00:04:32.663 anon_hugepages=0 00:04:32.663 23:12:24 -- setup/hugepages.sh@100 -- # resv=0 00:04:32.663 23:12:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:32.663 23:12:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.663 23:12:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.663 23:12:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.663 23:12:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.663 23:12:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:32.663 23:12:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.663 23:12:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.663 23:12:24 -- setup/common.sh@18 -- # local node= 00:04:32.663 23:12:24 -- setup/common.sh@19 -- # local var val 00:04:32.663 23:12:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.663 23:12:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.663 23:12:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.663 23:12:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.663 23:12:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.663 23:12:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7992440 kB' 'MemAvailable: 9532076 kB' 'Buffers: 2436 kB' 'Cached: 1753740 kB' 'SwapCached: 0 kB' 'Active: 457740 kB' 'Inactive: 1413720 kB' 'Active(anon): 125756 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117088 kB' 'Mapped: 47868 kB' 'Shmem: 10472 kB' 'KReclaimable: 62188 kB' 'Slab: 139672 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77484 kB' 'KernelStack: 6296 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.663 
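************************************
EDITOR'S ANNOTATION: get_meminfo (reconstructed sketch)
************************************
The loops traced above and below all exercise one helper, setup/common.sh's get_meminfo: it reads /proc/meminfo (or a node's sysfs meminfo when a node id is given) and prints the value of a single field. A minimal reconstruction from the traced statements follows; the traced script streams the file through printf into its read loop, whereas this sketch iterates an array, and the exact upstream source may differ.

shopt -s extglob  # needed for the "Node N " prefix strip below

get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    local -a mem
    # With a node id, read that node's sysfs meminfo instead; an empty
    # $node makes this path nonexistent, as the trace above shows.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    local line IFS=': '
    for line in "${mem[@]}"; do
        read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"  # e.g. "HugePages_Surp: 0" prints 0
            return 0
        fi
    done
    return 1
}

Under this reading, get_meminfo HugePages_Surp yields the 0 echoed at common.sh@33 above, and get_meminfo HugePages_Surp 0 repeats the check against node0's sysfs file.
************************************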
23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.663 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.663 23:12:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.664 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.664 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 
23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.665 23:12:24 -- setup/common.sh@33 -- # echo 1024 00:04:32.665 23:12:24 -- setup/common.sh@33 -- # return 0 00:04:32.665 23:12:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.665 23:12:24 -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.665 23:12:24 -- setup/hugepages.sh@27 -- # local node 00:04:32.665 23:12:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.665 23:12:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:32.665 23:12:24 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:32.665 23:12:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.665 23:12:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.665 23:12:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.665 23:12:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.665 23:12:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.665 23:12:24 -- setup/common.sh@18 -- # local node=0 00:04:32.665 23:12:24 -- setup/common.sh@19 -- # local var val 00:04:32.665 23:12:24 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.665 23:12:24 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.665 23:12:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.665 23:12:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.665 23:12:24 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.665 23:12:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7992440 kB' 'MemUsed: 4249532 kB' 'SwapCached: 0 kB' 'Active: 457740 kB' 'Inactive: 1413720 kB' 'Active(anon): 125756 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413720 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1756176 kB' 'Mapped: 47868 kB' 'AnonPages: 117088 kB' 'Shmem: 10472 kB' 'KernelStack: 6296 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62188 kB' 'Slab: 139664 kB' 'SReclaimable: 62188 kB' 'SUnreclaim: 77476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 
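************************************
EDITOR'S ANNOTATION: get_nodes (hedged sketch)
************************************
The common.sh@22–@24 lines just above switch mem_f to /sys/devices/system/node/node0/meminfo: this is the per-node pass driven by get_nodes (hugepages.sh@27–@33 in the trace). A hedged sketch of that node discovery, inferred only from the traced statements — the 1024 is this run's nr_hugepages rather than a constant, and the real helper may populate nodes_test differently.

shopt -s extglob  # for the +([0-9]) glob on node directories

get_nodes() {
    local node
    declare -gA nodes_sys nodes_test
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips the path prefix, leaving the node id.
        nodes_sys[${node##*node}]=1024   # this run expects 1024 pages/node
    done
    local no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))  # the trace records no_nodes=1 on this single-node VM
}
************************************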
00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.665 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.665 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- 
setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 
00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # continue 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.666 23:12:24 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.666 23:12:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.666 23:12:24 -- setup/common.sh@33 -- # echo 0 00:04:32.666 23:12:24 -- setup/common.sh@33 -- # return 0 00:04:32.666 23:12:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.666 23:12:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.666 23:12:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.666 23:12:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.666 node0=1024 expecting 1024 00:04:32.666 23:12:24 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:32.666 23:12:24 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:32.666 00:04:32.666 real 0m2.134s 00:04:32.666 user 0m0.862s 00:04:32.666 sys 0m1.349s 00:04:32.666 23:12:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.666 ************************************ 00:04:32.666 END TEST no_shrink_alloc 00:04:32.666 ************************************ 00:04:32.666 23:12:24 -- common/autotest_common.sh@10 -- # set +x 00:04:32.925 23:12:24 -- setup/hugepages.sh@217 -- # clear_hp 00:04:32.925 23:12:24 -- setup/hugepages.sh@37 -- # local node hp 00:04:32.925 23:12:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:32.925 23:12:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.925 23:12:24 -- setup/hugepages.sh@41 -- # echo 0 00:04:32.925 23:12:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.925 23:12:24 -- setup/hugepages.sh@41 -- # echo 0 00:04:32.925 23:12:24 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:32.925 23:12:24 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:32.925 ************************************ 00:04:32.925 END TEST hugepages 00:04:32.925 ************************************ 00:04:32.925 00:04:32.925 real 0m8.656s 00:04:32.925 user 0m3.346s 00:04:32.925 sys 0m5.614s 00:04:32.925 23:12:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.925 23:12:24 -- common/autotest_common.sh@10 -- # set +x 00:04:32.925 23:12:24 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:32.925 23:12:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.925 23:12:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.925 23:12:24 -- common/autotest_common.sh@10 -- # set +x 00:04:32.925 ************************************ 00:04:32.925 START TEST driver 00:04:32.925 ************************************ 00:04:32.925 23:12:24 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:32.925 * Looking for test storage... 00:04:32.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:32.925 23:12:24 -- setup/driver.sh@68 -- # setup reset 00:04:32.925 23:12:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.925 23:12:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.492 23:12:31 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:39.492 23:12:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.492 23:12:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.492 23:12:31 -- common/autotest_common.sh@10 -- # set +x 00:04:39.492 ************************************ 00:04:39.492 START TEST guess_driver 00:04:39.492 ************************************ 00:04:39.493 23:12:31 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:39.493 23:12:31 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:39.493 23:12:31 -- setup/driver.sh@47 -- # local fail=0 00:04:39.493 23:12:31 -- setup/driver.sh@49 -- # pick_driver 00:04:39.493 23:12:31 -- setup/driver.sh@36 -- # vfio 00:04:39.493 23:12:31 -- setup/driver.sh@21 -- # local iommu_grups 00:04:39.493 23:12:31 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:39.493 23:12:31 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:39.493 23:12:31 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:39.493 23:12:31 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:39.493 23:12:31 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:39.493 23:12:31 -- setup/driver.sh@32 -- # return 1 00:04:39.493 23:12:31 -- setup/driver.sh@38 -- # uio 00:04:39.493 23:12:31 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:39.493 23:12:31 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:39.493 23:12:31 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:39.493 23:12:31 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:39.493 23:12:31 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:39.493 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:39.493 23:12:31 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:39.493 Looking for driver=uio_pci_generic 00:04:39.493 23:12:31 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:39.493 23:12:31 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:39.493 23:12:31 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:39.493 23:12:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:39.493 23:12:31 -- setup/driver.sh@45 -- # setup output config 00:04:39.493 23:12:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.493 23:12:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:40.873 23:12:32 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:40.873 23:12:32 -- setup/driver.sh@58 -- # continue 00:04:40.873 23:12:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.873 23:12:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.873 23:12:32 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:40.873 23:12:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.873 23:12:32 -- setup/driver.sh@58 -- # [[ -> 
== \-\> ]] 00:04:40.873 23:12:32 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:40.873 23:12:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.133 23:12:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.133 23:12:32 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:41.133 23:12:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.133 23:12:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.133 23:12:32 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:41.133 23:12:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.133 23:12:32 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:41.133 23:12:32 -- setup/driver.sh@65 -- # setup reset 00:04:41.133 23:12:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.133 23:12:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.703 ************************************ 00:04:47.703 END TEST guess_driver 00:04:47.703 ************************************ 00:04:47.703 00:04:47.703 real 0m8.133s 00:04:47.703 user 0m1.039s 00:04:47.703 sys 0m2.340s 00:04:47.703 23:12:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.703 23:12:39 -- common/autotest_common.sh@10 -- # set +x 00:04:47.703 ************************************ 00:04:47.703 END TEST driver 00:04:47.703 ************************************ 00:04:47.703 00:04:47.703 real 0m14.754s 00:04:47.703 user 0m1.592s 00:04:47.703 sys 0m3.666s 00:04:47.703 23:12:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.703 23:12:39 -- common/autotest_common.sh@10 -- # set +x 00:04:47.703 23:12:39 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:47.703 23:12:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:47.703 23:12:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.703 23:12:39 -- common/autotest_common.sh@10 -- # set +x 00:04:47.703 ************************************ 00:04:47.703 START TEST devices 00:04:47.703 ************************************ 00:04:47.703 23:12:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:47.962 * Looking for test storage... 
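************************************
EDITOR'S ANNOTATION: pick_driver (condensed sketch)
************************************
The guess_driver test that just finished walks setup/driver.sh's pick_driver: vfio wins only when IOMMU groups exist or unsafe no-IOMMU mode is enabled; otherwise uio_pci_generic is accepted once modprobe --show-depends resolves it to a .ko, which is exactly what the driver.sh@11–@39 lines above record. A condensed sketch with the vfio and uio helpers folded into one function; note the traced script declares `local iommu_grups` (sic) at @21 but assigns `iommu_groups` at @27, so the sketch keeps the spelling the assignment uses.

shopt -s nullglob  # so an empty iommu_groups dir yields a zero-length array

pick_driver() {
    local unsafe_vfio=''
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # uio_pci_generic is usable once modprobe resolves its module chain;
    # the trace matches the output against *.ko* the same way.
    if [[ $(modprobe --show-depends uio_pci_generic 2>/dev/null) == *.ko* ]]; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found'
    return 1
}

On this host the glob at @27 expanded to nothing, (( 0 > 0 )) failed, and the uio path printed "Looking for driver=uio_pci_generic" as seen above.
************************************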
00:04:47.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:47.962 23:12:39 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:47.962 23:12:39 -- setup/devices.sh@192 -- # setup reset 00:04:47.962 23:12:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.962 23:12:39 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.362 23:12:41 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:49.362 23:12:41 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:49.362 23:12:41 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:49.362 23:12:41 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:49.362 23:12:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:49.362 23:12:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:04:49.362 23:12:41 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:04:49.362 23:12:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:04:49.362 23:12:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:49.362 23:12:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:49.362 23:12:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:49.362 23:12:41 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:49.362 23:12:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.362 23:12:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:49.362 23:12:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:49.362 23:12:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:49.362 23:12:41 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:49.362 23:12:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:49.362 23:12:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:49.362 23:12:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:49.362 23:12:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:49.362 23:12:41 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:49.362 23:12:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:49.362 23:12:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:49.362 23:12:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:49.362 23:12:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:49.362 23:12:41 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:49.362 23:12:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:49.362 23:12:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:49.362 23:12:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:49.362 23:12:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:04:49.362 23:12:41 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:04:49.362 23:12:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:49.362 23:12:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:49.362 23:12:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:49.362 23:12:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:04:49.362 23:12:41 -- common/autotest_common.sh@1647 -- # local 
device=nvme3n1 00:04:49.362 23:12:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:49.362 23:12:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:49.362 23:12:41 -- setup/devices.sh@196 -- # blocks=() 00:04:49.362 23:12:41 -- setup/devices.sh@196 -- # declare -a blocks 00:04:49.362 23:12:41 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:49.362 23:12:41 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:49.362 23:12:41 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:49.362 23:12:41 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.362 23:12:41 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:49.362 23:12:41 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:49.362 23:12:41 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:04:49.362 23:12:41 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:49.362 23:12:41 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:49.362 23:12:41 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:49.362 23:12:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:49.362 No valid GPT data, bailing 00:04:49.362 23:12:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:49.362 23:12:41 -- scripts/common.sh@393 -- # pt= 00:04:49.362 23:12:41 -- scripts/common.sh@394 -- # return 1 00:04:49.362 23:12:41 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:49.362 23:12:41 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:49.362 23:12:41 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:49.362 23:12:41 -- setup/common.sh@80 -- # echo 1073741824 00:04:49.622 23:12:41 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:49.622 23:12:41 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.622 23:12:41 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:49.622 23:12:41 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:49.622 23:12:41 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:49.622 23:12:41 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:49.622 23:12:41 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:49.622 23:12:41 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:49.622 23:12:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:49.622 No valid GPT data, bailing 00:04:49.622 23:12:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:49.622 23:12:41 -- scripts/common.sh@393 -- # pt= 00:04:49.622 23:12:41 -- scripts/common.sh@394 -- # return 1 00:04:49.622 23:12:41 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:49.622 23:12:41 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:49.622 23:12:41 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:49.622 23:12:41 -- setup/common.sh@80 -- # echo 4294967296 00:04:49.622 23:12:41 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:49.622 23:12:41 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.622 23:12:41 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:49.622 23:12:41 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.622 23:12:41 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:49.622 23:12:41 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:49.622 23:12:41 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:49.622 23:12:41 -- setup/devices.sh@203 -- # [[ '' == 
*\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:49.622 23:12:41 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:49.622 23:12:41 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:49.622 23:12:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:49.622 No valid GPT data, bailing 00:04:49.622 23:12:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:49.622 23:12:41 -- scripts/common.sh@393 -- # pt= 00:04:49.622 23:12:41 -- scripts/common.sh@394 -- # return 1 00:04:49.622 23:12:41 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:49.622 23:12:41 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:49.622 23:12:41 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:49.622 23:12:41 -- setup/common.sh@80 -- # echo 4294967296 00:04:49.622 23:12:41 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:49.622 23:12:41 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.622 23:12:41 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:49.622 23:12:41 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.622 23:12:41 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:49.622 23:12:41 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:49.622 23:12:41 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:49.622 23:12:41 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:49.622 23:12:41 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:49.622 23:12:41 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:49.622 23:12:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:49.622 No valid GPT data, bailing 00:04:49.622 23:12:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:49.622 23:12:41 -- scripts/common.sh@393 -- # pt= 00:04:49.622 23:12:41 -- scripts/common.sh@394 -- # return 1 00:04:49.622 23:12:41 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:49.622 23:12:41 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:49.622 23:12:41 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:49.622 23:12:41 -- setup/common.sh@80 -- # echo 4294967296 00:04:49.622 23:12:41 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:49.622 23:12:41 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.622 23:12:41 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:49.622 23:12:41 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.622 23:12:41 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:49.622 23:12:41 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:49.622 23:12:41 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:49.622 23:12:41 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:49.622 23:12:41 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:49.622 23:12:41 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:49.622 23:12:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:49.882 No valid GPT data, bailing 00:04:49.882 23:12:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:49.882 23:12:41 -- scripts/common.sh@393 -- # pt= 00:04:49.882 23:12:41 -- scripts/common.sh@394 -- # return 1 00:04:49.882 23:12:41 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:49.882 23:12:41 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:49.882 23:12:41 -- setup/common.sh@78 
-- # [[ -e /sys/block/nvme2n1 ]] 00:04:49.882 23:12:41 -- setup/common.sh@80 -- # echo 6343335936 00:04:49.882 23:12:41 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:49.882 23:12:41 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.882 23:12:41 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:49.882 23:12:41 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.882 23:12:41 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:49.883 23:12:41 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:49.883 23:12:41 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:49.883 23:12:41 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:49.883 23:12:41 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:49.883 23:12:41 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:49.883 23:12:41 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:49.883 No valid GPT data, bailing 00:04:49.883 23:12:41 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:49.883 23:12:41 -- scripts/common.sh@393 -- # pt= 00:04:49.883 23:12:41 -- scripts/common.sh@394 -- # return 1 00:04:49.883 23:12:41 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:49.883 23:12:41 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:49.883 23:12:41 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:49.883 23:12:41 -- setup/common.sh@80 -- # echo 5368709120 00:04:49.883 23:12:41 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:49.883 23:12:41 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.883 23:12:41 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:49.883 23:12:41 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:49.883 23:12:41 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:49.883 23:12:41 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:49.883 23:12:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.883 23:12:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.883 23:12:41 -- common/autotest_common.sh@10 -- # set +x 00:04:49.883 ************************************ 00:04:49.883 START TEST nvme_mount 00:04:49.883 ************************************ 00:04:49.883 23:12:41 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:49.883 23:12:41 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:49.883 23:12:41 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:49.883 23:12:41 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.883 23:12:41 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:49.883 23:12:41 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:49.883 23:12:41 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:49.883 23:12:41 -- setup/common.sh@40 -- # local part_no=1 00:04:49.883 23:12:41 -- setup/common.sh@41 -- # local size=1073741824 00:04:49.883 23:12:41 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:49.883 23:12:41 -- setup/common.sh@44 -- # parts=() 00:04:49.883 23:12:41 -- setup/common.sh@44 -- # local parts 00:04:49.883 23:12:41 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:49.883 23:12:41 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.883 23:12:41 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:49.883 23:12:41 -- setup/common.sh@46 -- # (( 
part++ )) 00:04:49.883 23:12:41 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.883 23:12:41 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:49.883 23:12:41 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:49.883 23:12:41 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:50.821 Creating new GPT entries in memory. 00:04:50.821 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:50.821 other utilities. 00:04:50.821 23:12:42 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:50.821 23:12:42 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.821 23:12:42 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:50.821 23:12:42 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:50.821 23:12:42 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:52.200 Creating new GPT entries in memory. 00:04:52.200 The operation has completed successfully. 00:04:52.200 23:12:43 -- setup/common.sh@57 -- # (( part++ )) 00:04:52.200 23:12:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.200 23:12:43 -- setup/common.sh@62 -- # wait 53996 00:04:52.200 23:12:43 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.200 23:12:43 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:52.200 23:12:43 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.200 23:12:43 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:52.200 23:12:43 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:52.200 23:12:43 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.200 23:12:43 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.200 23:12:43 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:52.200 23:12:43 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:52.200 23:12:43 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.200 23:12:43 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.200 23:12:43 -- setup/devices.sh@53 -- # local found=0 00:04:52.200 23:12:43 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.200 23:12:43 -- setup/devices.sh@56 -- # : 00:04:52.200 23:12:43 -- setup/devices.sh@59 -- # local pci status 00:04:52.200 23:12:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.200 23:12:43 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:52.200 23:12:43 -- setup/devices.sh@47 -- # setup output config 00:04:52.200 23:12:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.200 23:12:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:52.200 23:12:43 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:52.200 23:12:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.459 23:12:44 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:52.459 23:12:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.719 
23:12:44 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:52.719 23:12:44 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:52.719 23:12:44 -- setup/devices.sh@63 -- # found=1 00:04:52.719 23:12:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.719 23:12:44 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:52.719 23:12:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.978 23:12:44 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:52.978 23:12:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.978 23:12:44 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:52.978 23:12:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.238 23:12:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.238 23:12:44 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:53.238 23:12:44 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.238 23:12:44 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.238 23:12:44 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.238 23:12:44 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:53.238 23:12:44 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.238 23:12:44 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.238 23:12:44 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:53.238 23:12:44 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:53.238 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:53.238 23:12:44 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:53.238 23:12:44 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:53.497 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:53.497 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:53.497 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.497 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:53.497 23:12:45 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:53.497 23:12:45 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:53.497 23:12:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.497 23:12:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:53.497 23:12:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:53.498 23:12:45 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.498 23:12:45 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.498 23:12:45 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:53.498 23:12:45 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:53.498 23:12:45 -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.498 23:12:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.498 23:12:45 -- setup/devices.sh@53 -- # local found=0 00:04:53.498 23:12:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.498 23:12:45 -- setup/devices.sh@56 -- # : 00:04:53.498 23:12:45 -- setup/devices.sh@59 -- # local pci status 00:04:53.498 23:12:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.498 23:12:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:53.498 23:12:45 -- setup/devices.sh@47 -- # setup output config 00:04:53.498 23:12:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.498 23:12:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.757 23:12:45 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:53.757 23:12:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.017 23:12:45 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.017 23:12:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.276 23:12:45 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.276 23:12:45 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:54.276 23:12:45 -- setup/devices.sh@63 -- # found=1 00:04:54.276 23:12:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.276 23:12:45 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.276 23:12:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.536 23:12:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.536 23:12:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.536 23:12:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.536 23:12:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.795 23:12:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.795 23:12:46 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:54.795 23:12:46 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.795 23:12:46 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.795 23:12:46 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.795 23:12:46 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.795 23:12:46 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:54.795 23:12:46 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:54.795 23:12:46 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:54.795 23:12:46 -- setup/devices.sh@50 -- # local mount_point= 00:04:54.795 23:12:46 -- setup/devices.sh@51 -- # local test_file= 00:04:54.795 23:12:46 -- setup/devices.sh@53 -- # local found=0 00:04:54.795 23:12:46 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:54.796 23:12:46 -- setup/devices.sh@59 -- # local pci status 00:04:54.796 23:12:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.796 23:12:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:54.796 23:12:46 -- setup/devices.sh@47 -- # 
setup output config 00:04:54.796 23:12:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.796 23:12:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.055 23:12:46 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.055 23:12:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.055 23:12:46 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.055 23:12:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.624 23:12:47 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.624 23:12:47 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:55.624 23:12:47 -- setup/devices.sh@63 -- # found=1 00:04:55.624 23:12:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.624 23:12:47 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.624 23:12:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.883 23:12:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.883 23:12:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.883 23:12:47 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.883 23:12:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.143 23:12:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.143 23:12:47 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:56.143 23:12:47 -- setup/devices.sh@68 -- # return 0 00:04:56.143 23:12:47 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:56.143 23:12:47 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.143 23:12:47 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:56.143 23:12:47 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:56.143 23:12:47 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:56.143 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:56.143 00:04:56.143 real 0m6.153s 00:04:56.143 user 0m1.457s 00:04:56.143 sys 0m2.403s 00:04:56.143 23:12:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.143 23:12:47 -- common/autotest_common.sh@10 -- # set +x 00:04:56.143 ************************************ 00:04:56.143 END TEST nvme_mount 00:04:56.143 ************************************ 00:04:56.143 23:12:47 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:56.143 23:12:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.143 23:12:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.143 23:12:47 -- common/autotest_common.sh@10 -- # set +x 00:04:56.143 ************************************ 00:04:56.143 START TEST dm_mount 00:04:56.143 ************************************ 00:04:56.143 23:12:47 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:56.143 23:12:47 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:56.143 23:12:47 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:56.143 23:12:47 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:56.143 23:12:47 -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:04:56.143 23:12:47 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:56.143 23:12:47 -- setup/common.sh@40 -- # local part_no=2 00:04:56.143 23:12:47 -- setup/common.sh@41 -- # local size=1073741824 00:04:56.143 23:12:47 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:56.143 
23:12:47 -- setup/common.sh@44 -- # parts=() 00:04:56.143 23:12:47 -- setup/common.sh@44 -- # local parts 00:04:56.143 23:12:47 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:56.143 23:12:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.143 23:12:47 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.143 23:12:47 -- setup/common.sh@46 -- # (( part++ )) 00:04:56.143 23:12:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.143 23:12:47 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.143 23:12:47 -- setup/common.sh@46 -- # (( part++ )) 00:04:56.143 23:12:47 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.143 23:12:47 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:56.143 23:12:47 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:56.143 23:12:47 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:57.082 Creating new GPT entries in memory. 00:04:57.082 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:57.082 other utilities. 00:04:57.082 23:12:48 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:57.082 23:12:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.082 23:12:48 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.082 23:12:48 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.082 23:12:48 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:58.459 Creating new GPT entries in memory. 00:04:58.459 The operation has completed successfully. 00:04:58.459 23:12:49 -- setup/common.sh@57 -- # (( part++ )) 00:04:58.459 23:12:49 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.459 23:12:49 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:58.459 23:12:49 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.459 23:12:49 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:59.396 The operation has completed successfully. 
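The two sgdisk invocations above follow directly from the partition arithmetic traced out of setup/common.sh: the 1 GiB size is divided down by 4096, each partition starts either at sector 2048 or immediately after the previous one, and every sgdisk call is serialized with flock. A minimal standalone sketch of that loop, assuming the same disk and sizes as the trace (the wrapper script itself is hypothetical; the sgdisk arguments are taken verbatim from the log):

    #!/usr/bin/env bash
    disk=nvme1n1
    part_no=2
    size=$(( 1073741824 / 4096 ))        # 1 GiB scaled down, as in the trace -> 262144
    sgdisk "/dev/$disk" --zap-all        # wipe any existing GPT/MBR structures first
    part_start=0 part_end=0
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        # flock keeps concurrent readers of the partition table out while sgdisk writes
        flock "/dev/$disk" sgdisk "/dev/$disk" --new=$part:$part_start:$part_end
    done
    # produces --new=1:2048:264191 and --new=2:264192:526335, matching the log above
    # (the sync_dev_uevents.sh helper the real script runs alongside is omitted here)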
00:04:59.396 23:12:50 -- setup/common.sh@57 -- # (( part++ )) 00:04:59.396 23:12:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.396 23:12:50 -- setup/common.sh@62 -- # wait 54635 00:04:59.396 23:12:50 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:59.396 23:12:50 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.396 23:12:50 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.396 23:12:50 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:59.396 23:12:50 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:59.396 23:12:50 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.396 23:12:50 -- setup/devices.sh@161 -- # break 00:04:59.396 23:12:50 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.396 23:12:50 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:59.396 23:12:50 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:59.396 23:12:50 -- setup/devices.sh@166 -- # dm=dm-0 00:04:59.396 23:12:50 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:59.396 23:12:50 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:59.396 23:12:50 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.396 23:12:50 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:59.396 23:12:50 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.396 23:12:50 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.396 23:12:50 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:59.396 23:12:50 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.396 23:12:50 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.396 23:12:50 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:59.396 23:12:50 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:59.396 23:12:50 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.396 23:12:50 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.396 23:12:51 -- setup/devices.sh@53 -- # local found=0 00:04:59.396 23:12:51 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:59.396 23:12:51 -- setup/devices.sh@56 -- # : 00:04:59.396 23:12:51 -- setup/devices.sh@59 -- # local pci status 00:04:59.396 23:12:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.396 23:12:51 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:59.396 23:12:51 -- setup/devices.sh@47 -- # setup output config 00:04:59.396 23:12:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.396 23:12:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:59.656 23:12:51 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:59.656 23:12:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.656 23:12:51 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:59.656 23:12:51 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.225 23:12:51 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.225 23:12:51 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:00.225 23:12:51 -- setup/devices.sh@63 -- # found=1 00:05:00.225 23:12:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.225 23:12:51 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.225 23:12:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.225 23:12:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.225 23:12:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.484 23:12:52 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.484 23:12:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.484 23:12:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.484 23:12:52 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:00.484 23:12:52 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.484 23:12:52 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.485 23:12:52 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.485 23:12:52 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.744 23:12:52 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:05:00.744 23:12:52 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:00.744 23:12:52 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:05:00.744 23:12:52 -- setup/devices.sh@50 -- # local mount_point= 00:05:00.744 23:12:52 -- setup/devices.sh@51 -- # local test_file= 00:05:00.744 23:12:52 -- setup/devices.sh@53 -- # local found=0 00:05:00.744 23:12:52 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.744 23:12:52 -- setup/devices.sh@59 -- # local pci status 00:05:00.744 23:12:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.744 23:12:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:00.744 23:12:52 -- setup/devices.sh@47 -- # setup output config 00:05:00.744 23:12:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.744 23:12:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.744 23:12:52 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.744 23:12:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.003 23:12:52 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.003 23:12:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.262 23:12:52 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.262 23:12:52 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:05:01.262 23:12:52 -- setup/devices.sh@63 -- # found=1 00:05:01.262 23:12:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.262 23:12:52 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.262 23:12:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.522 23:12:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.522 23:12:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.522 23:12:53 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.522 23:12:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.780 23:12:53 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.780 23:12:53 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.780 23:12:53 -- setup/devices.sh@68 -- # return 0 00:05:01.780 23:12:53 -- setup/devices.sh@187 -- # cleanup_dm 00:05:01.780 23:12:53 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.780 23:12:53 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.780 23:12:53 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:01.780 23:12:53 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:01.780 23:12:53 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:05:01.780 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.780 23:12:53 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:01.780 23:12:53 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:05:01.780 00:05:01.780 real 0m5.625s 00:05:01.780 user 0m0.934s 00:05:01.780 sys 0m1.619s 00:05:01.780 23:12:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.780 ************************************ 00:05:01.780 END TEST dm_mount 00:05:01.780 ************************************ 00:05:01.780 23:12:53 -- common/autotest_common.sh@10 -- # set +x 00:05:01.780 23:12:53 -- setup/devices.sh@1 -- # cleanup 00:05:01.780 23:12:53 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:01.780 23:12:53 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.780 23:12:53 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:01.780 23:12:53 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:01.780 23:12:53 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:01.780 23:12:53 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:02.039 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:02.039 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:02.039 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:02.039 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:02.039 23:12:53 -- setup/devices.sh@12 -- # cleanup_dm 00:05:02.039 23:12:53 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.039 23:12:53 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:02.039 23:12:53 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:02.039 23:12:53 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:02.039 23:12:53 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:05:02.039 23:12:53 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:05:02.039 00:05:02.039 real 0m14.387s 00:05:02.039 user 0m3.370s 00:05:02.039 sys 0m5.327s 00:05:02.039 23:12:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.039 ************************************ 00:05:02.039 END TEST devices 00:05:02.039 ************************************ 00:05:02.039 23:12:53 -- common/autotest_common.sh@10 -- # 
set +x 00:05:02.039 ************************************ 00:05:02.039 END TEST setup.sh 00:05:02.039 ************************************ 00:05:02.039 00:05:02.039 real 0m51.993s 00:05:02.039 user 0m11.683s 00:05:02.039 sys 0m20.672s 00:05:02.039 23:12:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.039 23:12:53 -- common/autotest_common.sh@10 -- # set +x 00:05:02.297 23:12:53 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:02.297 Hugepages 00:05:02.297 node hugesize free / total 00:05:02.297 node0 1048576kB 0 / 0 00:05:02.297 node0 2048kB 2048 / 2048 00:05:02.297 00:05:02.297 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:02.555 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:02.556 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:02.814 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:02.814 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:02.814 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:02.814 23:12:54 -- spdk/autotest.sh@141 -- # uname -s 00:05:02.814 23:12:54 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:02.814 23:12:54 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:02.814 23:12:54 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:04.187 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.187 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.187 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.446 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.446 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.446 23:12:56 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:05.822 23:12:57 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:05.822 23:12:57 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:05.822 23:12:57 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:05.822 23:12:57 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:05.822 23:12:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:05.822 23:12:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:05.822 23:12:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.822 23:12:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:05.822 23:12:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.822 23:12:57 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:05.822 23:12:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:05.823 23:12:57 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.389 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.389 Waiting for block devices as requested 00:05:06.389 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.647 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.647 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.905 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:12.176 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:05:12.177 23:13:03 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:12.177 23:13:03 -- 
common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:12.177 23:13:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:12.177 23:13:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:12.177 23:13:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme2 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:12.177 23:13:03 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:12.177 23:13:03 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:12.177 23:13:03 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1542 -- # continue 00:05:12.177 23:13:03 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:12.177 23:13:03 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:12.177 23:13:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:12.177 23:13:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:12.177 23:13:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:12.177 23:13:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:12.177 23:13:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:12.177 23:13:03 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme3 00:05:12.177 23:13:03 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme3 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme3 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:12.177 23:13:03 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:12.177 23:13:03 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme3 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # cut 
-d: -f2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:12.177 23:13:03 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1542 -- # continue 00:05:12.177 23:13:03 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:12.177 23:13:03 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:05:12.177 23:13:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:12.177 23:13:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:08.0/nvme/nvme 00:05:12.177 23:13:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:12.177 23:13:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:12.177 23:13:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:12.177 23:13:03 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:12.177 23:13:03 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:12.177 23:13:03 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:12.177 23:13:03 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:12.177 23:13:03 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1542 -- # continue 00:05:12.177 23:13:03 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:12.177 23:13:03 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:05:12.177 23:13:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:12.177 23:13:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:09.0/nvme/nvme 00:05:12.177 23:13:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:12.177 23:13:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:12.177 23:13:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:12.177 23:13:03 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:12.177 23:13:03 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:12.177 23:13:03 -- 
common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:12.177 23:13:03 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:12.177 23:13:03 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:12.177 23:13:03 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:12.177 23:13:03 -- common/autotest_common.sh@1542 -- # continue 00:05:12.177 23:13:03 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:12.177 23:13:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:12.177 23:13:03 -- common/autotest_common.sh@10 -- # set +x 00:05:12.177 23:13:03 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:12.177 23:13:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:12.177 23:13:03 -- common/autotest_common.sh@10 -- # set +x 00:05:12.177 23:13:03 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.112 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.370 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.370 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.370 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.628 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.628 23:13:05 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:13.628 23:13:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:13.628 23:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:13.628 23:13:05 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:13.628 23:13:05 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:13.628 23:13:05 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:13.628 23:13:05 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:13.628 23:13:05 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:13.628 23:13:05 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:13.628 23:13:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:13.628 23:13:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:13.628 23:13:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.628 23:13:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:13.628 23:13:05 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:13.887 23:13:05 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:13.887 23:13:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:13.887 23:13:05 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.887 23:13:05 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:13.887 23:13:05 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:13.887 23:13:05 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.887 23:13:05 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.887 23:13:05 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:13.887 23:13:05 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:13.887 23:13:05 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:05:13.887 23:13:05 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.887 23:13:05 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:05:13.887 23:13:05 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:13.887 23:13:05 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.887 23:13:05 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.887 23:13:05 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:05:13.887 23:13:05 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:13.887 23:13:05 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.887 23:13:05 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:13.887 23:13:05 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:13.887 23:13:05 -- common/autotest_common.sh@1578 -- # return 0 00:05:13.887 23:13:05 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:13.887 23:13:05 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:13.887 23:13:05 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:13.887 23:13:05 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:13.887 23:13:05 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:13.887 23:13:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:13.887 23:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:13.887 23:13:05 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:13.887 23:13:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.887 23:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.887 23:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:13.887 ************************************ 00:05:13.887 START TEST env 00:05:13.887 ************************************ 00:05:13.887 23:13:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:13.887 * Looking for test storage... 
00:05:13.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:13.887 23:13:05 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:13.887 23:13:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.887 23:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.887 23:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:13.888 ************************************ 00:05:13.888 START TEST env_memory 00:05:13.888 ************************************ 00:05:13.888 23:13:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:13.888 00:05:13.888 00:05:13.888 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.888 http://cunit.sourceforge.net/ 00:05:13.888 00:05:13.888 00:05:13.888 Suite: memory 00:05:14.146 Test: alloc and free memory map ...[2024-07-26 23:13:05.647867] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:14.147 passed 00:05:14.147 Test: mem map translation ...[2024-07-26 23:13:05.689665] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:14.147 [2024-07-26 23:13:05.689757] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:14.147 [2024-07-26 23:13:05.689834] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:14.147 [2024-07-26 23:13:05.689862] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:14.147 passed 00:05:14.147 Test: mem map registration ...[2024-07-26 23:13:05.755220] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:14.147 [2024-07-26 23:13:05.755293] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:14.147 passed 00:05:14.147 Test: mem map adjacent registrations ...passed 00:05:14.147 00:05:14.147 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.147 suites 1 1 n/a 0 0 00:05:14.147 tests 4 4 4 0 0 00:05:14.147 asserts 152 152 152 0 n/a 00:05:14.147 00:05:14.147 Elapsed time = 0.246 seconds 00:05:14.147 00:05:14.147 real 0m0.307s 00:05:14.147 user 0m0.249s 00:05:14.147 sys 0m0.043s 00:05:14.147 23:13:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.147 23:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:14.147 ************************************ 00:05:14.147 END TEST env_memory 00:05:14.147 ************************************ 00:05:14.406 23:13:05 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:14.406 23:13:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.406 23:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.406 23:13:05 -- common/autotest_common.sh@10 -- # set +x 00:05:14.406 ************************************ 00:05:14.406 START TEST env_vtophys 00:05:14.406 ************************************ 00:05:14.406 23:13:05 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:14.406 EAL: lib.eal log level changed from notice to debug 00:05:14.406 EAL: Detected lcore 0 as core 0 on socket 0 00:05:14.406 EAL: Detected lcore 1 as core 0 on socket 0 00:05:14.406 EAL: Detected lcore 2 as core 0 on socket 0 00:05:14.406 EAL: Detected lcore 3 as core 0 on socket 0 00:05:14.406 EAL: Detected lcore 4 as core 0 on socket 0 00:05:14.406 EAL: Detected lcore 5 as core 0 on socket 0 00:05:14.406 EAL: Detected lcore 6 as core 0 on socket 0 00:05:14.406 EAL: Detected lcore 7 as core 0 on socket 0 00:05:14.406 EAL: Detected lcore 8 as core 0 on socket 0 00:05:14.406 EAL: Detected lcore 9 as core 0 on socket 0 00:05:14.406 EAL: Maximum logical cores by configuration: 128 00:05:14.406 EAL: Detected CPU lcores: 10 00:05:14.406 EAL: Detected NUMA nodes: 1 00:05:14.406 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:14.406 EAL: Detected shared linkage of DPDK 00:05:14.406 EAL: No shared files mode enabled, IPC will be disabled 00:05:14.406 EAL: Selected IOVA mode 'PA' 00:05:14.406 EAL: Probing VFIO support... 00:05:14.406 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:14.406 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:14.406 EAL: Ask a virtual area of 0x2e000 bytes 00:05:14.406 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:14.406 EAL: Setting up physically contiguous memory... 00:05:14.406 EAL: Setting maximum number of open files to 524288 00:05:14.406 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:14.406 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:14.406 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.406 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:14.406 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.406 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.406 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:14.406 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:14.406 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.406 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:14.406 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.406 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.406 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:14.406 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:14.406 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.406 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:14.406 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.406 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.406 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:14.406 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:14.406 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.406 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:14.406 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.406 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.406 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:14.406 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:14.406 EAL: Hugepages will be freed exactly as allocated. 
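The 0x400000000-byte virtual areas requested above are exactly n_segs x hugepage_sz from the same EAL message (8192 segments of 2 MiB per memseg list). A quick shell check of that arithmetic, purely illustrative:

    n_segs=8192
    hugepage_sz=2097152                                   # 2 MiB, as EAL reports
    printf '0x%x\n' $(( n_segs * hugepage_sz ))           # -> 0x400000000
    printf '%d GiB\n' $(( n_segs * hugepage_sz >> 30 ))   # -> 16 GiB per list; the 4 lists reserve 64 GiB of VA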
00:05:14.406 EAL: No shared files mode enabled, IPC is disabled 00:05:14.406 EAL: No shared files mode enabled, IPC is disabled 00:05:14.406 EAL: TSC frequency is ~2490000 KHz 00:05:14.406 EAL: Main lcore 0 is ready (tid=7f665504ea40;cpuset=[0]) 00:05:14.406 EAL: Trying to obtain current memory policy. 00:05:14.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.406 EAL: Restoring previous memory policy: 0 00:05:14.406 EAL: request: mp_malloc_sync 00:05:14.406 EAL: No shared files mode enabled, IPC is disabled 00:05:14.406 EAL: Heap on socket 0 was expanded by 2MB 00:05:14.406 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:14.664 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:14.665 EAL: Mem event callback 'spdk:(nil)' registered 00:05:14.665 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:14.665 00:05:14.665 00:05:14.665 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.665 http://cunit.sourceforge.net/ 00:05:14.665 00:05:14.665 00:05:14.665 Suite: components_suite 00:05:15.231 Test: vtophys_malloc_test ...passed 00:05:15.231 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:15.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.231 EAL: Restoring previous memory policy: 4 00:05:15.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.231 EAL: request: mp_malloc_sync 00:05:15.231 EAL: No shared files mode enabled, IPC is disabled 00:05:15.231 EAL: Heap on socket 0 was expanded by 4MB 00:05:15.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.231 EAL: request: mp_malloc_sync 00:05:15.231 EAL: No shared files mode enabled, IPC is disabled 00:05:15.231 EAL: Heap on socket 0 was shrunk by 4MB 00:05:15.231 EAL: Trying to obtain current memory policy. 00:05:15.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.231 EAL: Restoring previous memory policy: 4 00:05:15.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.231 EAL: request: mp_malloc_sync 00:05:15.231 EAL: No shared files mode enabled, IPC is disabled 00:05:15.231 EAL: Heap on socket 0 was expanded by 6MB 00:05:15.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.231 EAL: request: mp_malloc_sync 00:05:15.231 EAL: No shared files mode enabled, IPC is disabled 00:05:15.231 EAL: Heap on socket 0 was shrunk by 6MB 00:05:15.231 EAL: Trying to obtain current memory policy. 00:05:15.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.231 EAL: Restoring previous memory policy: 4 00:05:15.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.231 EAL: request: mp_malloc_sync 00:05:15.231 EAL: No shared files mode enabled, IPC is disabled 00:05:15.231 EAL: Heap on socket 0 was expanded by 10MB 00:05:15.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.231 EAL: request: mp_malloc_sync 00:05:15.231 EAL: No shared files mode enabled, IPC is disabled 00:05:15.231 EAL: Heap on socket 0 was shrunk by 10MB 00:05:15.231 EAL: Trying to obtain current memory policy. 
00:05:15.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.231 EAL: Restoring previous memory policy: 4 00:05:15.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.231 EAL: request: mp_malloc_sync 00:05:15.231 EAL: No shared files mode enabled, IPC is disabled 00:05:15.231 EAL: Heap on socket 0 was expanded by 18MB 00:05:15.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.231 EAL: request: mp_malloc_sync 00:05:15.231 EAL: No shared files mode enabled, IPC is disabled 00:05:15.231 EAL: Heap on socket 0 was shrunk by 18MB 00:05:15.231 EAL: Trying to obtain current memory policy. 00:05:15.232 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.232 EAL: Restoring previous memory policy: 4 00:05:15.232 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.232 EAL: request: mp_malloc_sync 00:05:15.232 EAL: No shared files mode enabled, IPC is disabled 00:05:15.232 EAL: Heap on socket 0 was expanded by 34MB 00:05:15.232 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.232 EAL: request: mp_malloc_sync 00:05:15.232 EAL: No shared files mode enabled, IPC is disabled 00:05:15.232 EAL: Heap on socket 0 was shrunk by 34MB 00:05:15.490 EAL: Trying to obtain current memory policy. 00:05:15.490 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.490 EAL: Restoring previous memory policy: 4 00:05:15.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.490 EAL: request: mp_malloc_sync 00:05:15.490 EAL: No shared files mode enabled, IPC is disabled 00:05:15.490 EAL: Heap on socket 0 was expanded by 66MB 00:05:15.490 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.490 EAL: request: mp_malloc_sync 00:05:15.490 EAL: No shared files mode enabled, IPC is disabled 00:05:15.490 EAL: Heap on socket 0 was shrunk by 66MB 00:05:15.748 EAL: Trying to obtain current memory policy. 00:05:15.748 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.748 EAL: Restoring previous memory policy: 4 00:05:15.748 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.748 EAL: request: mp_malloc_sync 00:05:15.748 EAL: No shared files mode enabled, IPC is disabled 00:05:15.748 EAL: Heap on socket 0 was expanded by 130MB 00:05:16.005 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.005 EAL: request: mp_malloc_sync 00:05:16.006 EAL: No shared files mode enabled, IPC is disabled 00:05:16.006 EAL: Heap on socket 0 was shrunk by 130MB 00:05:16.263 EAL: Trying to obtain current memory policy. 00:05:16.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.263 EAL: Restoring previous memory policy: 4 00:05:16.263 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.263 EAL: request: mp_malloc_sync 00:05:16.263 EAL: No shared files mode enabled, IPC is disabled 00:05:16.263 EAL: Heap on socket 0 was expanded by 258MB 00:05:16.830 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.830 EAL: request: mp_malloc_sync 00:05:16.830 EAL: No shared files mode enabled, IPC is disabled 00:05:16.830 EAL: Heap on socket 0 was shrunk by 258MB 00:05:17.399 EAL: Trying to obtain current memory policy. 
00:05:17.399 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.399 EAL: Restoring previous memory policy: 4 00:05:17.399 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.399 EAL: request: mp_malloc_sync 00:05:17.399 EAL: No shared files mode enabled, IPC is disabled 00:05:17.399 EAL: Heap on socket 0 was expanded by 514MB 00:05:18.777 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.777 EAL: request: mp_malloc_sync 00:05:18.777 EAL: No shared files mode enabled, IPC is disabled 00:05:18.777 EAL: Heap on socket 0 was shrunk by 514MB 00:05:19.713 EAL: Trying to obtain current memory policy. 00:05:19.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.971 EAL: Restoring previous memory policy: 4 00:05:19.971 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.971 EAL: request: mp_malloc_sync 00:05:19.971 EAL: No shared files mode enabled, IPC is disabled 00:05:19.971 EAL: Heap on socket 0 was expanded by 1026MB 00:05:22.501 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.501 EAL: request: mp_malloc_sync 00:05:22.501 EAL: No shared files mode enabled, IPC is disabled 00:05:22.501 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:23.876 passed 00:05:23.876 00:05:23.876 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.876 suites 1 1 n/a 0 0 00:05:23.876 tests 2 2 2 0 0 00:05:23.876 asserts 5285 5285 5285 0 n/a 00:05:23.876 00:05:23.876 Elapsed time = 9.105 seconds 00:05:23.876 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.876 EAL: request: mp_malloc_sync 00:05:23.876 EAL: No shared files mode enabled, IPC is disabled 00:05:23.876 EAL: Heap on socket 0 was shrunk by 2MB 00:05:23.876 EAL: No shared files mode enabled, IPC is disabled 00:05:23.876 EAL: No shared files mode enabled, IPC is disabled 00:05:23.876 EAL: No shared files mode enabled, IPC is disabled 00:05:23.876 00:05:23.876 real 0m9.450s 00:05:23.876 user 0m7.987s 00:05:23.876 sys 0m1.296s 00:05:23.876 23:13:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.876 23:13:15 -- common/autotest_common.sh@10 -- # set +x 00:05:23.876 ************************************ 00:05:23.876 END TEST env_vtophys 00:05:23.876 ************************************ 00:05:23.876 23:13:15 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:23.876 23:13:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.876 23:13:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.876 23:13:15 -- common/autotest_common.sh@10 -- # set +x 00:05:23.876 ************************************ 00:05:23.876 START TEST env_pci 00:05:23.876 ************************************ 00:05:23.876 23:13:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:23.876 00:05:23.876 00:05:23.876 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.876 http://cunit.sourceforge.net/ 00:05:23.876 00:05:23.876 00:05:23.876 Suite: pci 00:05:23.876 Test: pci_hook ...[2024-07-26 23:13:15.524166] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56412 has claimed it 00:05:23.876 passed 00:05:23.876 00:05:23.876 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.876 suites 1 1 n/a 0 0 00:05:23.876 tests 1 1 1 0 0 00:05:23.876 asserts 25 25 25 0 n/a 00:05:23.876 00:05:23.876 Elapsed time = 0.006 seconds 00:05:23.876 EAL: Cannot find device (10000:00:01.0) 00:05:23.876 EAL: Failed to attach device 
on primary process 00:05:23.876 00:05:23.876 real 0m0.087s 00:05:23.876 user 0m0.042s 00:05:23.876 sys 0m0.045s 00:05:23.877 23:13:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.877 23:13:15 -- common/autotest_common.sh@10 -- # set +x 00:05:23.877 ************************************ 00:05:23.877 END TEST env_pci 00:05:23.877 ************************************ 00:05:23.877 23:13:15 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:24.134 23:13:15 -- env/env.sh@15 -- # uname 00:05:24.134 23:13:15 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:24.134 23:13:15 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:24.134 23:13:15 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.134 23:13:15 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:24.134 23:13:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.134 23:13:15 -- common/autotest_common.sh@10 -- # set +x 00:05:24.134 ************************************ 00:05:24.134 START TEST env_dpdk_post_init 00:05:24.134 ************************************ 00:05:24.134 23:13:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:24.134 EAL: Detected CPU lcores: 10 00:05:24.134 EAL: Detected NUMA nodes: 1 00:05:24.134 EAL: Detected shared linkage of DPDK 00:05:24.134 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.134 EAL: Selected IOVA mode 'PA' 00:05:24.134 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.134 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:24.134 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:24.134 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:05:24.393 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:05:24.393 Starting DPDK initialization... 00:05:24.393 Starting SPDK post initialization... 00:05:24.393 SPDK NVMe probe 00:05:24.393 Attaching to 0000:00:06.0 00:05:24.393 Attaching to 0000:00:07.0 00:05:24.393 Attaching to 0000:00:08.0 00:05:24.393 Attaching to 0000:00:09.0 00:05:24.393 Attached to 0000:00:06.0 00:05:24.393 Attached to 0000:00:07.0 00:05:24.393 Attached to 0000:00:09.0 00:05:24.393 Attached to 0000:00:08.0 00:05:24.393 Cleaning up... 
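The probe sequence above is spdk_nvme's PCIe enumeration: env_dpdk_post_init brings up the env, probes the bus, and every controller accepted by the probe callback is handed to the attach callback, which is why "Attaching to" and "Attached to" appear in pairs for 0000:00:06.0 through 0000:00:09.0 (attach order can differ from probe order, as 08.0 and 09.0 show). A hedged sketch of that flow, modeled on SPDK's hello_world example; the opts.name value here is illustrative:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true; /* true = attach this controller */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "post_init_demo"; /* illustrative name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    /* NULL trid: enumerate all local PCIe NVMe controllers */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}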
00:05:24.393 00:05:24.393 real 0m0.297s 00:05:24.393 user 0m0.092s 00:05:24.393 sys 0m0.110s 00:05:24.393 23:13:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.393 23:13:15 -- common/autotest_common.sh@10 -- # set +x 00:05:24.393 ************************************ 00:05:24.393 END TEST env_dpdk_post_init 00:05:24.393 ************************************ 00:05:24.393 23:13:16 -- env/env.sh@26 -- # uname 00:05:24.393 23:13:16 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:24.393 23:13:16 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.393 23:13:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.393 23:13:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.393 23:13:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.393 ************************************ 00:05:24.393 START TEST env_mem_callbacks 00:05:24.393 ************************************ 00:05:24.393 23:13:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:24.393 EAL: Detected CPU lcores: 10 00:05:24.393 EAL: Detected NUMA nodes: 1 00:05:24.393 EAL: Detected shared linkage of DPDK 00:05:24.393 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:24.393 EAL: Selected IOVA mode 'PA' 00:05:24.727 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:24.727 00:05:24.727 00:05:24.727 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.727 http://cunit.sourceforge.net/ 00:05:24.727 00:05:24.727 00:05:24.727 Suite: memory 00:05:24.727 Test: test ... 00:05:24.727 register 0x200000200000 2097152 00:05:24.727 malloc 3145728 00:05:24.727 register 0x200000400000 4194304 00:05:24.727 buf 0x2000004fffc0 len 3145728 PASSED 00:05:24.727 malloc 64 00:05:24.727 buf 0x2000004ffec0 len 64 PASSED 00:05:24.727 malloc 4194304 00:05:24.727 register 0x200000800000 6291456 00:05:24.727 buf 0x2000009fffc0 len 4194304 PASSED 00:05:24.727 free 0x2000004fffc0 3145728 00:05:24.727 free 0x2000004ffec0 64 00:05:24.727 unregister 0x200000400000 4194304 PASSED 00:05:24.727 free 0x2000009fffc0 4194304 00:05:24.727 unregister 0x200000800000 6291456 PASSED 00:05:24.727 malloc 8388608 00:05:24.727 register 0x200000400000 10485760 00:05:24.727 buf 0x2000005fffc0 len 8388608 PASSED 00:05:24.727 free 0x2000005fffc0 8388608 00:05:24.727 unregister 0x200000400000 10485760 PASSED 00:05:24.727 passed 00:05:24.727 00:05:24.727 Run Summary: Type Total Ran Passed Failed Inactive 00:05:24.727 suites 1 1 n/a 0 0 00:05:24.727 tests 1 1 1 0 0 00:05:24.727 asserts 15 15 15 0 n/a 00:05:24.727 00:05:24.727 Elapsed time = 0.082 seconds 00:05:24.727 00:05:24.727 real 0m0.287s 00:05:24.727 user 0m0.119s 00:05:24.727 sys 0m0.067s 00:05:24.727 23:13:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.727 23:13:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.727 ************************************ 00:05:24.727 END TEST env_mem_callbacks 00:05:24.727 ************************************ 00:05:24.727 00:05:24.727 real 0m10.915s 00:05:24.727 user 0m8.649s 00:05:24.727 sys 0m1.872s 00:05:24.727 23:13:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.727 23:13:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.727 ************************************ 00:05:24.727 END TEST env 00:05:24.728 ************************************ 00:05:24.728 23:13:16 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
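The register/unregister trace in mem_callbacks above shows the same 2 MiB rounding from the consumer's side: the 3 MiB malloc surfaces as a 4 MiB register and its free as a 4 MiB unregister, because heap growth is reported per whole hugepage-backed region. A sketch of hooking those events, assuming the notify-callback shape in spdk/env.h; the buffer size and alignment are illustrative:

#include <stdio.h>
#include "spdk/env.h"

static int
notify_cb(void *cb_ctx, struct spdk_mem_map *map,
          enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
    printf("%s %p len=%zu\n",
           action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
           vaddr, size);
    return 0; /* nonzero would reject the (un)registration */
}

static const struct spdk_mem_map_ops ops = {
    .notify_cb = notify_cb,
};

void
watch_heap_events(void)
{
    /* Regions registered before this call are replayed into the new map */
    struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);
    void *buf = spdk_malloc(3 * 1024 * 1024, 0x200000, NULL,
                            SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

    spdk_free(buf);
    spdk_mem_map_free(&map);
}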
00:05:24.728 23:13:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.728 23:13:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.728 23:13:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.987 ************************************ 00:05:24.987 START TEST rpc 00:05:24.987 ************************************ 00:05:24.987 23:13:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:24.987 * Looking for test storage... 00:05:24.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:24.987 23:13:16 -- rpc/rpc.sh@65 -- # spdk_pid=56530 00:05:24.987 23:13:16 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:24.987 23:13:16 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.987 23:13:16 -- rpc/rpc.sh@67 -- # waitforlisten 56530 00:05:24.987 23:13:16 -- common/autotest_common.sh@819 -- # '[' -z 56530 ']' 00:05:24.987 23:13:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.987 23:13:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:24.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.987 23:13:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.987 23:13:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:24.987 23:13:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.987 [2024-07-26 23:13:16.681404] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:24.987 [2024-07-26 23:13:16.681543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56530 ] 00:05:25.246 [2024-07-26 23:13:16.854240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.505 [2024-07-26 23:13:17.110311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.505 [2024-07-26 23:13:17.110539] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:25.505 [2024-07-26 23:13:17.110559] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56530' to capture a snapshot of events at runtime. 00:05:25.505 [2024-07-26 23:13:17.110571] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56530 for offline analysis/debug. 
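Everything rpc_cmd does in the tests below is JSON-RPC 2.0 over the Unix socket the target announced (/var/tmp/spdk.sock): each shell invocation becomes one request object, and the bdev arrays the tests dump are the "result" member of the reply. A representative exchange for bdev_get_bdevs (the id value is illustrative, and the result elides to the arrays shown below):

request:  {"jsonrpc": "2.0", "method": "bdev_get_bdevs", "id": 1}
response: {"jsonrpc": "2.0", "id": 1, "result": [ { "name": "Malloc0", ... } ]}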
00:05:25.505 [2024-07-26 23:13:17.110622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.440 23:13:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.440 23:13:18 -- common/autotest_common.sh@852 -- # return 0 00:05:26.440 23:13:18 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:26.440 23:13:18 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:26.440 23:13:18 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:26.440 23:13:18 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:26.440 23:13:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.440 23:13:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.440 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.440 ************************************ 00:05:26.440 START TEST rpc_integrity 00:05:26.440 ************************************ 00:05:26.440 23:13:18 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:26.440 23:13:18 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.440 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.440 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.440 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.440 23:13:18 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.440 23:13:18 -- rpc/rpc.sh@13 -- # jq length 00:05:26.699 23:13:18 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.699 23:13:18 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.700 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.700 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.700 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.700 23:13:18 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:26.700 23:13:18 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.700 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.700 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.700 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.700 23:13:18 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.700 { 00:05:26.700 "name": "Malloc0", 00:05:26.700 "aliases": [ 00:05:26.700 "5b1a0e6e-1c14-422c-9028-5268b84a6358" 00:05:26.700 ], 00:05:26.700 "product_name": "Malloc disk", 00:05:26.700 "block_size": 512, 00:05:26.700 "num_blocks": 16384, 00:05:26.700 "uuid": "5b1a0e6e-1c14-422c-9028-5268b84a6358", 00:05:26.700 "assigned_rate_limits": { 00:05:26.700 "rw_ios_per_sec": 0, 00:05:26.700 "rw_mbytes_per_sec": 0, 00:05:26.700 "r_mbytes_per_sec": 0, 00:05:26.700 "w_mbytes_per_sec": 0 00:05:26.700 }, 00:05:26.700 "claimed": false, 00:05:26.700 "zoned": false, 00:05:26.700 "supported_io_types": { 00:05:26.700 "read": true, 00:05:26.700 "write": true, 00:05:26.700 "unmap": true, 00:05:26.700 "write_zeroes": true, 00:05:26.700 "flush": true, 00:05:26.700 "reset": true, 00:05:26.700 "compare": false, 00:05:26.700 "compare_and_write": false, 00:05:26.700 "abort": true, 00:05:26.700 "nvme_admin": false, 00:05:26.700 "nvme_io": false 00:05:26.700 }, 00:05:26.700 "memory_domains": [ 00:05:26.700 { 00:05:26.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.700 
"dma_device_type": 2 00:05:26.700 } 00:05:26.700 ], 00:05:26.700 "driver_specific": {} 00:05:26.700 } 00:05:26.700 ]' 00:05:26.700 23:13:18 -- rpc/rpc.sh@17 -- # jq length 00:05:26.700 23:13:18 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.700 23:13:18 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:26.700 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.700 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.700 [2024-07-26 23:13:18.321748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:26.700 [2024-07-26 23:13:18.321834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.700 [2024-07-26 23:13:18.321863] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:05:26.700 [2024-07-26 23:13:18.321879] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.700 [2024-07-26 23:13:18.324497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.700 [2024-07-26 23:13:18.324542] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.700 Passthru0 00:05:26.700 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.700 23:13:18 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.700 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.700 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.700 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.700 23:13:18 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.700 { 00:05:26.700 "name": "Malloc0", 00:05:26.700 "aliases": [ 00:05:26.700 "5b1a0e6e-1c14-422c-9028-5268b84a6358" 00:05:26.700 ], 00:05:26.700 "product_name": "Malloc disk", 00:05:26.700 "block_size": 512, 00:05:26.700 "num_blocks": 16384, 00:05:26.700 "uuid": "5b1a0e6e-1c14-422c-9028-5268b84a6358", 00:05:26.700 "assigned_rate_limits": { 00:05:26.700 "rw_ios_per_sec": 0, 00:05:26.700 "rw_mbytes_per_sec": 0, 00:05:26.700 "r_mbytes_per_sec": 0, 00:05:26.700 "w_mbytes_per_sec": 0 00:05:26.700 }, 00:05:26.700 "claimed": true, 00:05:26.700 "claim_type": "exclusive_write", 00:05:26.700 "zoned": false, 00:05:26.700 "supported_io_types": { 00:05:26.700 "read": true, 00:05:26.700 "write": true, 00:05:26.700 "unmap": true, 00:05:26.700 "write_zeroes": true, 00:05:26.700 "flush": true, 00:05:26.700 "reset": true, 00:05:26.700 "compare": false, 00:05:26.700 "compare_and_write": false, 00:05:26.700 "abort": true, 00:05:26.700 "nvme_admin": false, 00:05:26.700 "nvme_io": false 00:05:26.700 }, 00:05:26.700 "memory_domains": [ 00:05:26.700 { 00:05:26.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.700 "dma_device_type": 2 00:05:26.700 } 00:05:26.700 ], 00:05:26.700 "driver_specific": {} 00:05:26.700 }, 00:05:26.700 { 00:05:26.700 "name": "Passthru0", 00:05:26.700 "aliases": [ 00:05:26.700 "1e359aef-66d4-5995-9e57-75cb3e2cba6b" 00:05:26.700 ], 00:05:26.700 "product_name": "passthru", 00:05:26.700 "block_size": 512, 00:05:26.700 "num_blocks": 16384, 00:05:26.700 "uuid": "1e359aef-66d4-5995-9e57-75cb3e2cba6b", 00:05:26.700 "assigned_rate_limits": { 00:05:26.700 "rw_ios_per_sec": 0, 00:05:26.700 "rw_mbytes_per_sec": 0, 00:05:26.700 "r_mbytes_per_sec": 0, 00:05:26.700 "w_mbytes_per_sec": 0 00:05:26.700 }, 00:05:26.700 "claimed": false, 00:05:26.700 "zoned": false, 00:05:26.700 "supported_io_types": { 00:05:26.700 "read": true, 00:05:26.700 "write": true, 00:05:26.700 "unmap": true, 00:05:26.700 
"write_zeroes": true, 00:05:26.700 "flush": true, 00:05:26.700 "reset": true, 00:05:26.700 "compare": false, 00:05:26.700 "compare_and_write": false, 00:05:26.700 "abort": true, 00:05:26.700 "nvme_admin": false, 00:05:26.700 "nvme_io": false 00:05:26.700 }, 00:05:26.700 "memory_domains": [ 00:05:26.700 { 00:05:26.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.700 "dma_device_type": 2 00:05:26.700 } 00:05:26.700 ], 00:05:26.700 "driver_specific": { 00:05:26.700 "passthru": { 00:05:26.700 "name": "Passthru0", 00:05:26.700 "base_bdev_name": "Malloc0" 00:05:26.700 } 00:05:26.700 } 00:05:26.700 } 00:05:26.700 ]' 00:05:26.700 23:13:18 -- rpc/rpc.sh@21 -- # jq length 00:05:26.700 23:13:18 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.700 23:13:18 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.700 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.700 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.700 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.700 23:13:18 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:26.700 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.700 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.959 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.959 23:13:18 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.959 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.959 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.959 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.959 23:13:18 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.959 23:13:18 -- rpc/rpc.sh@26 -- # jq length 00:05:26.959 23:13:18 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.959 00:05:26.959 real 0m0.361s 00:05:26.959 user 0m0.189s 00:05:26.959 sys 0m0.065s 00:05:26.959 23:13:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.959 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.959 ************************************ 00:05:26.959 END TEST rpc_integrity 00:05:26.959 ************************************ 00:05:26.959 23:13:18 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:26.959 23:13:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.959 23:13:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.959 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.959 ************************************ 00:05:26.959 START TEST rpc_plugins 00:05:26.959 ************************************ 00:05:26.959 23:13:18 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:26.959 23:13:18 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:26.959 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.959 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.959 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.959 23:13:18 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:26.959 23:13:18 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:26.959 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.959 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.959 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.959 23:13:18 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:26.959 { 00:05:26.959 "name": "Malloc1", 00:05:26.959 "aliases": [ 00:05:26.959 "4de602b4-d19b-422d-9eb0-b3dbabb3c7dc" 00:05:26.959 ], 00:05:26.959 "product_name": "Malloc disk", 00:05:26.959 
"block_size": 4096, 00:05:26.959 "num_blocks": 256, 00:05:26.959 "uuid": "4de602b4-d19b-422d-9eb0-b3dbabb3c7dc", 00:05:26.959 "assigned_rate_limits": { 00:05:26.959 "rw_ios_per_sec": 0, 00:05:26.959 "rw_mbytes_per_sec": 0, 00:05:26.959 "r_mbytes_per_sec": 0, 00:05:26.959 "w_mbytes_per_sec": 0 00:05:26.959 }, 00:05:26.959 "claimed": false, 00:05:26.959 "zoned": false, 00:05:26.959 "supported_io_types": { 00:05:26.959 "read": true, 00:05:26.959 "write": true, 00:05:26.959 "unmap": true, 00:05:26.959 "write_zeroes": true, 00:05:26.959 "flush": true, 00:05:26.959 "reset": true, 00:05:26.959 "compare": false, 00:05:26.959 "compare_and_write": false, 00:05:26.959 "abort": true, 00:05:26.959 "nvme_admin": false, 00:05:26.959 "nvme_io": false 00:05:26.959 }, 00:05:26.959 "memory_domains": [ 00:05:26.959 { 00:05:26.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.959 "dma_device_type": 2 00:05:26.959 } 00:05:26.959 ], 00:05:26.959 "driver_specific": {} 00:05:26.959 } 00:05:26.959 ]' 00:05:26.959 23:13:18 -- rpc/rpc.sh@32 -- # jq length 00:05:26.959 23:13:18 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:26.959 23:13:18 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:26.960 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.960 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.960 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.960 23:13:18 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:26.960 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.960 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.960 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.960 23:13:18 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:26.960 23:13:18 -- rpc/rpc.sh@36 -- # jq length 00:05:27.219 23:13:18 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:27.219 00:05:27.219 real 0m0.163s 00:05:27.219 user 0m0.089s 00:05:27.219 sys 0m0.032s 00:05:27.219 23:13:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.219 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:27.219 ************************************ 00:05:27.219 END TEST rpc_plugins 00:05:27.219 ************************************ 00:05:27.219 23:13:18 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:27.219 23:13:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.219 23:13:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.219 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:27.219 ************************************ 00:05:27.219 START TEST rpc_trace_cmd_test 00:05:27.219 ************************************ 00:05:27.219 23:13:18 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:27.219 23:13:18 -- rpc/rpc.sh@40 -- # local info 00:05:27.219 23:13:18 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:27.219 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.219 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:05:27.219 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.219 23:13:18 -- rpc/rpc.sh@42 -- # info='{ 00:05:27.219 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56530", 00:05:27.219 "tpoint_group_mask": "0x8", 00:05:27.219 "iscsi_conn": { 00:05:27.219 "mask": "0x2", 00:05:27.219 "tpoint_mask": "0x0" 00:05:27.219 }, 00:05:27.219 "scsi": { 00:05:27.219 "mask": "0x4", 00:05:27.219 "tpoint_mask": "0x0" 00:05:27.219 }, 00:05:27.219 "bdev": { 00:05:27.219 "mask": "0x8", 00:05:27.219 "tpoint_mask": 
"0xffffffffffffffff" 00:05:27.219 }, 00:05:27.219 "nvmf_rdma": { 00:05:27.219 "mask": "0x10", 00:05:27.219 "tpoint_mask": "0x0" 00:05:27.219 }, 00:05:27.219 "nvmf_tcp": { 00:05:27.219 "mask": "0x20", 00:05:27.219 "tpoint_mask": "0x0" 00:05:27.219 }, 00:05:27.219 "ftl": { 00:05:27.219 "mask": "0x40", 00:05:27.219 "tpoint_mask": "0x0" 00:05:27.219 }, 00:05:27.219 "blobfs": { 00:05:27.219 "mask": "0x80", 00:05:27.219 "tpoint_mask": "0x0" 00:05:27.219 }, 00:05:27.219 "dsa": { 00:05:27.219 "mask": "0x200", 00:05:27.219 "tpoint_mask": "0x0" 00:05:27.219 }, 00:05:27.219 "thread": { 00:05:27.219 "mask": "0x400", 00:05:27.219 "tpoint_mask": "0x0" 00:05:27.219 }, 00:05:27.219 "nvme_pcie": { 00:05:27.219 "mask": "0x800", 00:05:27.219 "tpoint_mask": "0x0" 00:05:27.219 }, 00:05:27.219 "iaa": { 00:05:27.219 "mask": "0x1000", 00:05:27.219 "tpoint_mask": "0x0" 00:05:27.219 }, 00:05:27.219 "nvme_tcp": { 00:05:27.219 "mask": "0x2000", 00:05:27.219 "tpoint_mask": "0x0" 00:05:27.219 }, 00:05:27.219 "bdev_nvme": { 00:05:27.219 "mask": "0x4000", 00:05:27.219 "tpoint_mask": "0x0" 00:05:27.219 } 00:05:27.219 }' 00:05:27.219 23:13:18 -- rpc/rpc.sh@43 -- # jq length 00:05:27.219 23:13:18 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:27.219 23:13:18 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:27.219 23:13:18 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:27.219 23:13:18 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:27.478 23:13:18 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:27.478 23:13:18 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:27.478 23:13:19 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:27.478 23:13:19 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:27.478 23:13:19 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:27.478 00:05:27.478 real 0m0.238s 00:05:27.478 user 0m0.192s 00:05:27.478 sys 0m0.037s 00:05:27.478 23:13:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.478 23:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.478 ************************************ 00:05:27.478 END TEST rpc_trace_cmd_test 00:05:27.478 ************************************ 00:05:27.478 23:13:19 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:27.478 23:13:19 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:27.478 23:13:19 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:27.478 23:13:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.478 23:13:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.478 23:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.478 ************************************ 00:05:27.478 START TEST rpc_daemon_integrity 00:05:27.478 ************************************ 00:05:27.478 23:13:19 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:27.478 23:13:19 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:27.478 23:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.478 23:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.478 23:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.478 23:13:19 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:27.478 23:13:19 -- rpc/rpc.sh@13 -- # jq length 00:05:27.478 23:13:19 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:27.478 23:13:19 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:27.478 23:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.478 23:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.478 23:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.478 23:13:19 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:27.478 23:13:19 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:27.478 23:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.478 23:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.737 23:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.737 23:13:19 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.737 { 00:05:27.737 "name": "Malloc2", 00:05:27.737 "aliases": [ 00:05:27.737 "d1093306-5791-4c8a-a064-36a5b007a07b" 00:05:27.737 ], 00:05:27.737 "product_name": "Malloc disk", 00:05:27.737 "block_size": 512, 00:05:27.737 "num_blocks": 16384, 00:05:27.737 "uuid": "d1093306-5791-4c8a-a064-36a5b007a07b", 00:05:27.737 "assigned_rate_limits": { 00:05:27.737 "rw_ios_per_sec": 0, 00:05:27.737 "rw_mbytes_per_sec": 0, 00:05:27.737 "r_mbytes_per_sec": 0, 00:05:27.737 "w_mbytes_per_sec": 0 00:05:27.737 }, 00:05:27.737 "claimed": false, 00:05:27.737 "zoned": false, 00:05:27.737 "supported_io_types": { 00:05:27.737 "read": true, 00:05:27.737 "write": true, 00:05:27.737 "unmap": true, 00:05:27.737 "write_zeroes": true, 00:05:27.737 "flush": true, 00:05:27.737 "reset": true, 00:05:27.737 "compare": false, 00:05:27.737 "compare_and_write": false, 00:05:27.737 "abort": true, 00:05:27.737 "nvme_admin": false, 00:05:27.737 "nvme_io": false 00:05:27.737 }, 00:05:27.737 "memory_domains": [ 00:05:27.737 { 00:05:27.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.737 "dma_device_type": 2 00:05:27.737 } 00:05:27.737 ], 00:05:27.737 "driver_specific": {} 00:05:27.737 } 00:05:27.737 ]' 00:05:27.737 23:13:19 -- rpc/rpc.sh@17 -- # jq length 00:05:27.737 23:13:19 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.737 23:13:19 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:27.737 23:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.737 23:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.737 [2024-07-26 23:13:19.282281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:27.737 [2024-07-26 23:13:19.282344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.737 [2024-07-26 23:13:19.282366] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:05:27.737 [2024-07-26 23:13:19.282382] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.737 [2024-07-26 23:13:19.284977] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.737 [2024-07-26 23:13:19.285032] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.737 Passthru0 00:05:27.737 23:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.737 23:13:19 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.737 23:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.737 23:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.737 23:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.737 23:13:19 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.737 { 00:05:27.737 "name": "Malloc2", 00:05:27.737 "aliases": [ 00:05:27.737 "d1093306-5791-4c8a-a064-36a5b007a07b" 00:05:27.737 ], 00:05:27.737 "product_name": "Malloc disk", 00:05:27.737 "block_size": 512, 00:05:27.737 "num_blocks": 16384, 00:05:27.737 "uuid": "d1093306-5791-4c8a-a064-36a5b007a07b", 00:05:27.737 "assigned_rate_limits": { 00:05:27.737 "rw_ios_per_sec": 0, 00:05:27.737 "rw_mbytes_per_sec": 0, 00:05:27.737 "r_mbytes_per_sec": 0, 00:05:27.737 
"w_mbytes_per_sec": 0 00:05:27.737 }, 00:05:27.737 "claimed": true, 00:05:27.737 "claim_type": "exclusive_write", 00:05:27.737 "zoned": false, 00:05:27.737 "supported_io_types": { 00:05:27.737 "read": true, 00:05:27.737 "write": true, 00:05:27.737 "unmap": true, 00:05:27.737 "write_zeroes": true, 00:05:27.737 "flush": true, 00:05:27.737 "reset": true, 00:05:27.737 "compare": false, 00:05:27.737 "compare_and_write": false, 00:05:27.737 "abort": true, 00:05:27.737 "nvme_admin": false, 00:05:27.737 "nvme_io": false 00:05:27.737 }, 00:05:27.737 "memory_domains": [ 00:05:27.737 { 00:05:27.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.737 "dma_device_type": 2 00:05:27.737 } 00:05:27.737 ], 00:05:27.737 "driver_specific": {} 00:05:27.737 }, 00:05:27.737 { 00:05:27.737 "name": "Passthru0", 00:05:27.737 "aliases": [ 00:05:27.737 "20f71192-db3a-58fc-bf65-a7bf83275f94" 00:05:27.737 ], 00:05:27.737 "product_name": "passthru", 00:05:27.737 "block_size": 512, 00:05:27.737 "num_blocks": 16384, 00:05:27.737 "uuid": "20f71192-db3a-58fc-bf65-a7bf83275f94", 00:05:27.737 "assigned_rate_limits": { 00:05:27.737 "rw_ios_per_sec": 0, 00:05:27.737 "rw_mbytes_per_sec": 0, 00:05:27.737 "r_mbytes_per_sec": 0, 00:05:27.737 "w_mbytes_per_sec": 0 00:05:27.737 }, 00:05:27.737 "claimed": false, 00:05:27.737 "zoned": false, 00:05:27.737 "supported_io_types": { 00:05:27.737 "read": true, 00:05:27.737 "write": true, 00:05:27.737 "unmap": true, 00:05:27.737 "write_zeroes": true, 00:05:27.737 "flush": true, 00:05:27.737 "reset": true, 00:05:27.737 "compare": false, 00:05:27.737 "compare_and_write": false, 00:05:27.737 "abort": true, 00:05:27.737 "nvme_admin": false, 00:05:27.737 "nvme_io": false 00:05:27.737 }, 00:05:27.737 "memory_domains": [ 00:05:27.737 { 00:05:27.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.737 "dma_device_type": 2 00:05:27.737 } 00:05:27.737 ], 00:05:27.737 "driver_specific": { 00:05:27.737 "passthru": { 00:05:27.737 "name": "Passthru0", 00:05:27.737 "base_bdev_name": "Malloc2" 00:05:27.737 } 00:05:27.737 } 00:05:27.737 } 00:05:27.737 ]' 00:05:27.737 23:13:19 -- rpc/rpc.sh@21 -- # jq length 00:05:27.737 23:13:19 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.737 23:13:19 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.737 23:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.737 23:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.737 23:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.737 23:13:19 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:27.737 23:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.737 23:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.737 23:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.737 23:13:19 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.737 23:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.737 23:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.737 23:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.738 23:13:19 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.738 23:13:19 -- rpc/rpc.sh@26 -- # jq length 00:05:27.738 23:13:19 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.738 00:05:27.738 real 0m0.344s 00:05:27.738 user 0m0.197s 00:05:27.738 sys 0m0.055s 00:05:27.738 23:13:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.738 23:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.738 ************************************ 00:05:27.738 END TEST 
rpc_daemon_integrity 00:05:27.738 ************************************ 00:05:27.997 23:13:19 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.997 23:13:19 -- rpc/rpc.sh@84 -- # killprocess 56530 00:05:27.997 23:13:19 -- common/autotest_common.sh@926 -- # '[' -z 56530 ']' 00:05:27.997 23:13:19 -- common/autotest_common.sh@930 -- # kill -0 56530 00:05:27.997 23:13:19 -- common/autotest_common.sh@931 -- # uname 00:05:27.997 23:13:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:27.997 23:13:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56530 00:05:27.997 23:13:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:27.997 23:13:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:27.997 23:13:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56530' 00:05:27.997 killing process with pid 56530 00:05:27.997 23:13:19 -- common/autotest_common.sh@945 -- # kill 56530 00:05:27.997 23:13:19 -- common/autotest_common.sh@950 -- # wait 56530 00:05:30.531 00:05:30.531 real 0m5.660s 00:05:30.531 user 0m6.140s 00:05:30.531 sys 0m1.113s 00:05:30.531 23:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.531 ************************************ 00:05:30.531 END TEST rpc 00:05:30.531 ************************************ 00:05:30.531 23:13:22 -- common/autotest_common.sh@10 -- # set +x 00:05:30.531 23:13:22 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:30.531 23:13:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.531 23:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.531 23:13:22 -- common/autotest_common.sh@10 -- # set +x 00:05:30.531 ************************************ 00:05:30.531 START TEST rpc_client 00:05:30.531 ************************************ 00:05:30.531 23:13:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:30.791 * Looking for test storage... 
00:05:30.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:30.791 23:13:22 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:30.791 OK 00:05:30.791 23:13:22 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:30.791 00:05:30.791 real 0m0.203s 00:05:30.791 user 0m0.077s 00:05:30.791 sys 0m0.139s 00:05:30.791 23:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.791 23:13:22 -- common/autotest_common.sh@10 -- # set +x 00:05:30.791 ************************************ 00:05:30.791 END TEST rpc_client 00:05:30.791 ************************************ 00:05:30.791 23:13:22 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:30.791 23:13:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.791 23:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.791 23:13:22 -- common/autotest_common.sh@10 -- # set +x 00:05:30.791 ************************************ 00:05:30.791 START TEST json_config 00:05:30.791 ************************************ 00:05:30.791 23:13:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:30.791 23:13:22 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:30.791 23:13:22 -- nvmf/common.sh@7 -- # uname -s 00:05:30.791 23:13:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.791 23:13:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.791 23:13:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.791 23:13:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.791 23:13:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.791 23:13:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.791 23:13:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.791 23:13:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.791 23:13:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.050 23:13:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.051 23:13:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8183e38f-c15b-4fba-ab96-70becb9a62cd 00:05:31.051 23:13:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=8183e38f-c15b-4fba-ab96-70becb9a62cd 00:05:31.051 23:13:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.051 23:13:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.051 23:13:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.051 23:13:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:31.051 23:13:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.051 23:13:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.051 23:13:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.051 23:13:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.051 23:13:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.051 23:13:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.051 23:13:22 -- paths/export.sh@5 -- # export PATH 00:05:31.051 23:13:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.051 23:13:22 -- nvmf/common.sh@46 -- # : 0 00:05:31.051 23:13:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:31.051 23:13:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:31.051 23:13:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:31.051 23:13:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.051 23:13:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.051 23:13:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:31.051 23:13:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:31.051 23:13:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:31.051 23:13:22 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:31.051 23:13:22 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:31.051 23:13:22 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:31.051 23:13:22 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:31.051 WARNING: No tests are enabled so not running JSON configuration tests 00:05:31.051 23:13:22 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:31.051 23:13:22 -- json_config/json_config.sh@27 -- # exit 0 00:05:31.051 00:05:31.051 real 0m0.104s 00:05:31.051 user 0m0.045s 00:05:31.051 sys 0m0.060s 00:05:31.051 23:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.051 23:13:22 -- common/autotest_common.sh@10 -- # set +x 00:05:31.051 ************************************ 00:05:31.051 END TEST json_config 00:05:31.051 ************************************ 00:05:31.051 23:13:22 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:31.051 23:13:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.051 23:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.051 23:13:22 -- common/autotest_common.sh@10 -- # set +x 00:05:31.051 ************************************ 00:05:31.051 START TEST json_config_extra_key 00:05:31.051 
************************************ 00:05:31.051 23:13:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:31.051 23:13:22 -- nvmf/common.sh@7 -- # uname -s 00:05:31.051 23:13:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.051 23:13:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.051 23:13:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.051 23:13:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.051 23:13:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.051 23:13:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.051 23:13:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.051 23:13:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.051 23:13:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.051 23:13:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.051 23:13:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8183e38f-c15b-4fba-ab96-70becb9a62cd 00:05:31.051 23:13:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=8183e38f-c15b-4fba-ab96-70becb9a62cd 00:05:31.051 23:13:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.051 23:13:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.051 23:13:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.051 23:13:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:31.051 23:13:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.051 23:13:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.051 23:13:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.051 23:13:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.051 23:13:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.051 23:13:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.051 23:13:22 -- paths/export.sh@5 -- # export PATH 00:05:31.051 23:13:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.051 23:13:22 -- nvmf/common.sh@46 -- # : 0 00:05:31.051 23:13:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:31.051 23:13:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:31.051 23:13:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:31.051 23:13:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.051 23:13:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.051 23:13:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:31.051 23:13:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:31.051 23:13:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.051 INFO: launching applications... 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56835 00:05:31.051 23:13:22 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:31.051 Waiting for target to run... 00:05:31.052 23:13:22 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:05:31.052 23:13:22 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56835 /var/tmp/spdk_tgt.sock 00:05:31.052 23:13:22 -- common/autotest_common.sh@819 -- # '[' -z 56835 ']' 00:05:31.052 23:13:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.052 23:13:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.052 23:13:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.052 23:13:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.052 23:13:22 -- common/autotest_common.sh@10 -- # set +x 00:05:31.310 [2024-07-26 23:13:22.852141] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:31.310 [2024-07-26 23:13:22.852262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56835 ] 00:05:31.568 [2024-07-26 23:13:23.247126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.826 [2024-07-26 23:13:23.484937] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.826 [2024-07-26 23:13:23.485143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.761 23:13:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.761 00:05:32.761 23:13:24 -- common/autotest_common.sh@852 -- # return 0 00:05:32.761 23:13:24 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:32.761 INFO: shutting down applications... 00:05:32.761 23:13:24 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
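The shutdown traced below is a bounded polling loop: send SIGINT, then probe the pid with kill -0 every half second for at most 30 rounds, declaring success once the process is gone. A sketch of that loop, with the retry count and interval taken from the trace:

    # Graceful shutdown mirroring json_config_test_shutdown_app below.
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 sends no signal; it only tests whether the pid still exists.
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
    done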
00:05:32.761 23:13:24 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:32.761 23:13:24 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:32.761 23:13:24 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:32.761 23:13:24 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56835 ]] 00:05:32.761 23:13:24 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56835 00:05:32.761 23:13:24 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:32.761 23:13:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:32.761 23:13:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56835 00:05:32.761 23:13:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:33.329 23:13:24 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:33.329 23:13:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:33.329 23:13:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56835 00:05:33.329 23:13:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:33.587 23:13:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:33.587 23:13:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:33.587 23:13:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56835 00:05:33.587 23:13:25 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:34.153 23:13:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:34.153 23:13:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:34.153 23:13:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56835 00:05:34.153 23:13:25 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:34.720 23:13:26 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:34.720 23:13:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:34.720 23:13:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56835 00:05:34.720 23:13:26 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:35.287 23:13:26 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:35.287 23:13:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:35.287 23:13:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56835 00:05:35.287 23:13:26 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:35.856 23:13:27 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:35.856 23:13:27 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:35.856 23:13:27 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56835 00:05:35.856 23:13:27 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:35.856 23:13:27 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:35.856 23:13:27 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:35.856 SPDK target shutdown done 00:05:35.856 23:13:27 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:35.856 Success 00:05:35.856 23:13:27 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:35.856 00:05:35.856 real 0m4.703s 00:05:35.856 user 0m4.424s 00:05:35.856 sys 0m0.611s 00:05:35.856 23:13:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.856 23:13:27 -- common/autotest_common.sh@10 -- # set +x 00:05:35.856 ************************************ 00:05:35.856 END TEST json_config_extra_key 00:05:35.856 ************************************ 00:05:35.856 23:13:27 -- 
spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.856 23:13:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:35.856 23:13:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.856 23:13:27 -- common/autotest_common.sh@10 -- # set +x 00:05:35.856 ************************************ 00:05:35.856 START TEST alias_rpc 00:05:35.856 ************************************ 00:05:35.856 23:13:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.856 * Looking for test storage... 00:05:35.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:35.856 23:13:27 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.856 23:13:27 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56934 00:05:35.856 23:13:27 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56934 00:05:35.856 23:13:27 -- common/autotest_common.sh@819 -- # '[' -z 56934 ']' 00:05:35.856 23:13:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.856 23:13:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:35.856 23:13:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.856 23:13:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:35.856 23:13:27 -- common/autotest_common.sh@10 -- # set +x 00:05:35.856 23:13:27 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.115 [2024-07-26 23:13:27.628302] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
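Teardown in this test, and in the ones after it, goes through a killprocess helper. Judging from the traces below, it checks that the pid is still alive, inspects the process name on Linux so it never signals a bare sudo, then kills the process and waits for it to be reaped. A rough reconstruction under those assumptions; this is the observable behavior, not SPDK's exact helper:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                  # process must still exist
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0
            [[ $name == sudo ]] && return 1         # refuse to signal sudo
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }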
00:05:36.115 [2024-07-26 23:13:27.628423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56934 ] 00:05:36.115 [2024-07-26 23:13:27.802939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.375 [2024-07-26 23:13:28.074952] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.375 [2024-07-26 23:13:28.075169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.292 23:13:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.292 23:13:29 -- common/autotest_common.sh@852 -- # return 0 00:05:38.292 23:13:29 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:38.292 23:13:29 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56934 00:05:38.292 23:13:29 -- common/autotest_common.sh@926 -- # '[' -z 56934 ']' 00:05:38.292 23:13:29 -- common/autotest_common.sh@930 -- # kill -0 56934 00:05:38.292 23:13:29 -- common/autotest_common.sh@931 -- # uname 00:05:38.292 23:13:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:38.292 23:13:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56934 00:05:38.292 23:13:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:38.292 23:13:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:38.292 23:13:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56934' 00:05:38.292 killing process with pid 56934 00:05:38.292 23:13:29 -- common/autotest_common.sh@945 -- # kill 56934 00:05:38.292 23:13:29 -- common/autotest_common.sh@950 -- # wait 56934 00:05:40.847 00:05:40.847 real 0m5.122s 00:05:40.847 user 0m5.138s 00:05:40.847 sys 0m0.773s 00:05:40.847 23:13:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.847 ************************************ 00:05:40.847 END TEST alias_rpc 00:05:40.847 23:13:32 -- common/autotest_common.sh@10 -- # set +x 00:05:40.847 ************************************ 00:05:40.847 23:13:32 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:40.847 23:13:32 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:40.847 23:13:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.847 23:13:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.847 23:13:32 -- common/autotest_common.sh@10 -- # set +x 00:05:40.847 ************************************ 00:05:40.847 START TEST spdkcli_tcp 00:05:40.847 ************************************ 00:05:40.847 23:13:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:41.107 * Looking for test storage... 
00:05:41.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:41.107 23:13:32 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:41.107 23:13:32 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:41.107 23:13:32 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:41.107 23:13:32 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:41.107 23:13:32 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:41.107 23:13:32 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:41.107 23:13:32 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:41.107 23:13:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:41.107 23:13:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.107 23:13:32 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57053 00:05:41.107 23:13:32 -- spdkcli/tcp.sh@27 -- # waitforlisten 57053 00:05:41.107 23:13:32 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:41.107 23:13:32 -- common/autotest_common.sh@819 -- # '[' -z 57053 ']' 00:05:41.107 23:13:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.107 23:13:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.107 23:13:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.107 23:13:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.107 23:13:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.107 [2024-07-26 23:13:32.828662] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
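The spdkcli_tcp run below exercises the RPC server over TCP rather than the UNIX socket directly: socat listens on 127.0.0.1:9998 and forwards to /var/tmp/spdk.sock, and rpc.py is pointed at that TCP endpoint. A sketch of the bridge, with addresses and flags copied verbatim from the trace (-r and -t appear to be rpc.py's connection-retry and timeout options):

    # Bridge TCP port 9998 to the target's UNIX RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Drive the RPC server through the bridge; expect the method list below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods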
00:05:41.107 [2024-07-26 23:13:32.828774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57053 ] 00:05:41.366 [2024-07-26 23:13:32.999210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.626 [2024-07-26 23:13:33.255024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.626 [2024-07-26 23:13:33.255741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.626 [2024-07-26 23:13:33.255860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.531 23:13:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.531 23:13:34 -- common/autotest_common.sh@852 -- # return 0 00:05:43.531 23:13:34 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:43.531 23:13:34 -- spdkcli/tcp.sh@31 -- # socat_pid=57085 00:05:43.531 23:13:34 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:43.531 [ 00:05:43.531 "bdev_malloc_delete", 00:05:43.531 "bdev_malloc_create", 00:05:43.531 "bdev_null_resize", 00:05:43.531 "bdev_null_delete", 00:05:43.531 "bdev_null_create", 00:05:43.531 "bdev_nvme_cuse_unregister", 00:05:43.531 "bdev_nvme_cuse_register", 00:05:43.531 "bdev_opal_new_user", 00:05:43.531 "bdev_opal_set_lock_state", 00:05:43.531 "bdev_opal_delete", 00:05:43.531 "bdev_opal_get_info", 00:05:43.531 "bdev_opal_create", 00:05:43.531 "bdev_nvme_opal_revert", 00:05:43.531 "bdev_nvme_opal_init", 00:05:43.531 "bdev_nvme_send_cmd", 00:05:43.531 "bdev_nvme_get_path_iostat", 00:05:43.531 "bdev_nvme_get_mdns_discovery_info", 00:05:43.531 "bdev_nvme_stop_mdns_discovery", 00:05:43.531 "bdev_nvme_start_mdns_discovery", 00:05:43.531 "bdev_nvme_set_multipath_policy", 00:05:43.531 "bdev_nvme_set_preferred_path", 00:05:43.531 "bdev_nvme_get_io_paths", 00:05:43.531 "bdev_nvme_remove_error_injection", 00:05:43.531 "bdev_nvme_add_error_injection", 00:05:43.531 "bdev_nvme_get_discovery_info", 00:05:43.531 "bdev_nvme_stop_discovery", 00:05:43.531 "bdev_nvme_start_discovery", 00:05:43.531 "bdev_nvme_get_controller_health_info", 00:05:43.531 "bdev_nvme_disable_controller", 00:05:43.531 "bdev_nvme_enable_controller", 00:05:43.531 "bdev_nvme_reset_controller", 00:05:43.531 "bdev_nvme_get_transport_statistics", 00:05:43.531 "bdev_nvme_apply_firmware", 00:05:43.531 "bdev_nvme_detach_controller", 00:05:43.531 "bdev_nvme_get_controllers", 00:05:43.531 "bdev_nvme_attach_controller", 00:05:43.531 "bdev_nvme_set_hotplug", 00:05:43.531 "bdev_nvme_set_options", 00:05:43.531 "bdev_passthru_delete", 00:05:43.531 "bdev_passthru_create", 00:05:43.531 "bdev_lvol_grow_lvstore", 00:05:43.531 "bdev_lvol_get_lvols", 00:05:43.531 "bdev_lvol_get_lvstores", 00:05:43.531 "bdev_lvol_delete", 00:05:43.531 "bdev_lvol_set_read_only", 00:05:43.531 "bdev_lvol_resize", 00:05:43.531 "bdev_lvol_decouple_parent", 00:05:43.531 "bdev_lvol_inflate", 00:05:43.531 "bdev_lvol_rename", 00:05:43.531 "bdev_lvol_clone_bdev", 00:05:43.531 "bdev_lvol_clone", 00:05:43.531 "bdev_lvol_snapshot", 00:05:43.531 "bdev_lvol_create", 00:05:43.531 "bdev_lvol_delete_lvstore", 00:05:43.531 "bdev_lvol_rename_lvstore", 00:05:43.531 "bdev_lvol_create_lvstore", 00:05:43.531 "bdev_raid_set_options", 00:05:43.531 "bdev_raid_remove_base_bdev", 00:05:43.531 "bdev_raid_add_base_bdev", 
00:05:43.531 "bdev_raid_delete", 00:05:43.531 "bdev_raid_create", 00:05:43.531 "bdev_raid_get_bdevs", 00:05:43.531 "bdev_error_inject_error", 00:05:43.532 "bdev_error_delete", 00:05:43.532 "bdev_error_create", 00:05:43.532 "bdev_split_delete", 00:05:43.532 "bdev_split_create", 00:05:43.532 "bdev_delay_delete", 00:05:43.532 "bdev_delay_create", 00:05:43.532 "bdev_delay_update_latency", 00:05:43.532 "bdev_zone_block_delete", 00:05:43.532 "bdev_zone_block_create", 00:05:43.532 "blobfs_create", 00:05:43.532 "blobfs_detect", 00:05:43.532 "blobfs_set_cache_size", 00:05:43.532 "bdev_xnvme_delete", 00:05:43.532 "bdev_xnvme_create", 00:05:43.532 "bdev_aio_delete", 00:05:43.532 "bdev_aio_rescan", 00:05:43.532 "bdev_aio_create", 00:05:43.532 "bdev_ftl_set_property", 00:05:43.532 "bdev_ftl_get_properties", 00:05:43.532 "bdev_ftl_get_stats", 00:05:43.532 "bdev_ftl_unmap", 00:05:43.532 "bdev_ftl_unload", 00:05:43.532 "bdev_ftl_delete", 00:05:43.532 "bdev_ftl_load", 00:05:43.532 "bdev_ftl_create", 00:05:43.532 "bdev_virtio_attach_controller", 00:05:43.532 "bdev_virtio_scsi_get_devices", 00:05:43.532 "bdev_virtio_detach_controller", 00:05:43.532 "bdev_virtio_blk_set_hotplug", 00:05:43.532 "bdev_iscsi_delete", 00:05:43.532 "bdev_iscsi_create", 00:05:43.532 "bdev_iscsi_set_options", 00:05:43.532 "accel_error_inject_error", 00:05:43.532 "ioat_scan_accel_module", 00:05:43.532 "dsa_scan_accel_module", 00:05:43.532 "iaa_scan_accel_module", 00:05:43.532 "iscsi_set_options", 00:05:43.532 "iscsi_get_auth_groups", 00:05:43.532 "iscsi_auth_group_remove_secret", 00:05:43.532 "iscsi_auth_group_add_secret", 00:05:43.532 "iscsi_delete_auth_group", 00:05:43.532 "iscsi_create_auth_group", 00:05:43.532 "iscsi_set_discovery_auth", 00:05:43.532 "iscsi_get_options", 00:05:43.532 "iscsi_target_node_request_logout", 00:05:43.532 "iscsi_target_node_set_redirect", 00:05:43.532 "iscsi_target_node_set_auth", 00:05:43.532 "iscsi_target_node_add_lun", 00:05:43.532 "iscsi_get_connections", 00:05:43.532 "iscsi_portal_group_set_auth", 00:05:43.532 "iscsi_start_portal_group", 00:05:43.532 "iscsi_delete_portal_group", 00:05:43.532 "iscsi_create_portal_group", 00:05:43.532 "iscsi_get_portal_groups", 00:05:43.532 "iscsi_delete_target_node", 00:05:43.532 "iscsi_target_node_remove_pg_ig_maps", 00:05:43.532 "iscsi_target_node_add_pg_ig_maps", 00:05:43.532 "iscsi_create_target_node", 00:05:43.532 "iscsi_get_target_nodes", 00:05:43.532 "iscsi_delete_initiator_group", 00:05:43.532 "iscsi_initiator_group_remove_initiators", 00:05:43.532 "iscsi_initiator_group_add_initiators", 00:05:43.532 "iscsi_create_initiator_group", 00:05:43.532 "iscsi_get_initiator_groups", 00:05:43.532 "nvmf_set_crdt", 00:05:43.532 "nvmf_set_config", 00:05:43.532 "nvmf_set_max_subsystems", 00:05:43.532 "nvmf_subsystem_get_listeners", 00:05:43.532 "nvmf_subsystem_get_qpairs", 00:05:43.532 "nvmf_subsystem_get_controllers", 00:05:43.532 "nvmf_get_stats", 00:05:43.532 "nvmf_get_transports", 00:05:43.532 "nvmf_create_transport", 00:05:43.532 "nvmf_get_targets", 00:05:43.532 "nvmf_delete_target", 00:05:43.532 "nvmf_create_target", 00:05:43.532 "nvmf_subsystem_allow_any_host", 00:05:43.532 "nvmf_subsystem_remove_host", 00:05:43.532 "nvmf_subsystem_add_host", 00:05:43.532 "nvmf_subsystem_remove_ns", 00:05:43.532 "nvmf_subsystem_add_ns", 00:05:43.532 "nvmf_subsystem_listener_set_ana_state", 00:05:43.532 "nvmf_discovery_get_referrals", 00:05:43.532 "nvmf_discovery_remove_referral", 00:05:43.532 "nvmf_discovery_add_referral", 00:05:43.532 "nvmf_subsystem_remove_listener", 00:05:43.532 
"nvmf_subsystem_add_listener", 00:05:43.532 "nvmf_delete_subsystem", 00:05:43.532 "nvmf_create_subsystem", 00:05:43.532 "nvmf_get_subsystems", 00:05:43.532 "env_dpdk_get_mem_stats", 00:05:43.532 "nbd_get_disks", 00:05:43.532 "nbd_stop_disk", 00:05:43.532 "nbd_start_disk", 00:05:43.532 "ublk_recover_disk", 00:05:43.532 "ublk_get_disks", 00:05:43.532 "ublk_stop_disk", 00:05:43.532 "ublk_start_disk", 00:05:43.532 "ublk_destroy_target", 00:05:43.532 "ublk_create_target", 00:05:43.532 "virtio_blk_create_transport", 00:05:43.532 "virtio_blk_get_transports", 00:05:43.532 "vhost_controller_set_coalescing", 00:05:43.532 "vhost_get_controllers", 00:05:43.532 "vhost_delete_controller", 00:05:43.532 "vhost_create_blk_controller", 00:05:43.532 "vhost_scsi_controller_remove_target", 00:05:43.532 "vhost_scsi_controller_add_target", 00:05:43.532 "vhost_start_scsi_controller", 00:05:43.532 "vhost_create_scsi_controller", 00:05:43.532 "thread_set_cpumask", 00:05:43.532 "framework_get_scheduler", 00:05:43.532 "framework_set_scheduler", 00:05:43.532 "framework_get_reactors", 00:05:43.532 "thread_get_io_channels", 00:05:43.532 "thread_get_pollers", 00:05:43.532 "thread_get_stats", 00:05:43.532 "framework_monitor_context_switch", 00:05:43.532 "spdk_kill_instance", 00:05:43.532 "log_enable_timestamps", 00:05:43.532 "log_get_flags", 00:05:43.532 "log_clear_flag", 00:05:43.532 "log_set_flag", 00:05:43.532 "log_get_level", 00:05:43.532 "log_set_level", 00:05:43.532 "log_get_print_level", 00:05:43.532 "log_set_print_level", 00:05:43.532 "framework_enable_cpumask_locks", 00:05:43.532 "framework_disable_cpumask_locks", 00:05:43.532 "framework_wait_init", 00:05:43.532 "framework_start_init", 00:05:43.532 "scsi_get_devices", 00:05:43.532 "bdev_get_histogram", 00:05:43.532 "bdev_enable_histogram", 00:05:43.532 "bdev_set_qos_limit", 00:05:43.532 "bdev_set_qd_sampling_period", 00:05:43.532 "bdev_get_bdevs", 00:05:43.532 "bdev_reset_iostat", 00:05:43.532 "bdev_get_iostat", 00:05:43.532 "bdev_examine", 00:05:43.532 "bdev_wait_for_examine", 00:05:43.532 "bdev_set_options", 00:05:43.532 "notify_get_notifications", 00:05:43.532 "notify_get_types", 00:05:43.532 "accel_get_stats", 00:05:43.532 "accel_set_options", 00:05:43.532 "accel_set_driver", 00:05:43.532 "accel_crypto_key_destroy", 00:05:43.532 "accel_crypto_keys_get", 00:05:43.532 "accel_crypto_key_create", 00:05:43.532 "accel_assign_opc", 00:05:43.532 "accel_get_module_info", 00:05:43.532 "accel_get_opc_assignments", 00:05:43.532 "vmd_rescan", 00:05:43.532 "vmd_remove_device", 00:05:43.532 "vmd_enable", 00:05:43.532 "sock_set_default_impl", 00:05:43.532 "sock_impl_set_options", 00:05:43.532 "sock_impl_get_options", 00:05:43.532 "iobuf_get_stats", 00:05:43.532 "iobuf_set_options", 00:05:43.532 "framework_get_pci_devices", 00:05:43.532 "framework_get_config", 00:05:43.532 "framework_get_subsystems", 00:05:43.532 "trace_get_info", 00:05:43.532 "trace_get_tpoint_group_mask", 00:05:43.532 "trace_disable_tpoint_group", 00:05:43.532 "trace_enable_tpoint_group", 00:05:43.532 "trace_clear_tpoint_mask", 00:05:43.532 "trace_set_tpoint_mask", 00:05:43.532 "spdk_get_version", 00:05:43.532 "rpc_get_methods" 00:05:43.532 ] 00:05:43.532 23:13:35 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:43.532 23:13:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:43.532 23:13:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.532 23:13:35 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:43.532 23:13:35 -- spdkcli/tcp.sh@38 -- # killprocess 57053 00:05:43.532 
23:13:35 -- common/autotest_common.sh@926 -- # '[' -z 57053 ']' 00:05:43.532 23:13:35 -- common/autotest_common.sh@930 -- # kill -0 57053 00:05:43.532 23:13:35 -- common/autotest_common.sh@931 -- # uname 00:05:43.532 23:13:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:43.532 23:13:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57053 00:05:43.532 23:13:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:43.532 23:13:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:43.532 23:13:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57053' 00:05:43.532 killing process with pid 57053 00:05:43.532 23:13:35 -- common/autotest_common.sh@945 -- # kill 57053 00:05:43.532 23:13:35 -- common/autotest_common.sh@950 -- # wait 57053 00:05:46.068 00:05:46.068 real 0m5.174s 00:05:46.068 user 0m9.176s 00:05:46.068 sys 0m0.847s 00:05:46.068 23:13:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.068 23:13:37 -- common/autotest_common.sh@10 -- # set +x 00:05:46.068 ************************************ 00:05:46.068 END TEST spdkcli_tcp 00:05:46.068 ************************************ 00:05:46.327 23:13:37 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.327 23:13:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.327 23:13:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.327 23:13:37 -- common/autotest_common.sh@10 -- # set +x 00:05:46.327 ************************************ 00:05:46.327 START TEST dpdk_mem_utility 00:05:46.327 ************************************ 00:05:46.327 23:13:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.327 * Looking for test storage... 00:05:46.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:46.327 23:13:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.327 23:13:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57181 00:05:46.327 23:13:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.327 23:13:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57181 00:05:46.327 23:13:37 -- common/autotest_common.sh@819 -- # '[' -z 57181 ']' 00:05:46.327 23:13:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.327 23:13:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.327 23:13:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.327 23:13:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.327 23:13:37 -- common/autotest_common.sh@10 -- # set +x 00:05:46.327 [2024-07-26 23:13:38.077766] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
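The memory-utility test below is essentially a two-step flow: ask the running target to dump its DPDK memory state, then post-process the dump offline. A sketch of that flow; judging from the trace, env_dpdk_get_mem_stats writes /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py is assumed here to read that file by default, since no path is shown being passed:

    scripts=/home/vagrant/spdk_repo/spdk/scripts
    # Step 1: have the target write its memory dump (returns the filename).
    "$scripts/rpc.py" env_dpdk_get_mem_stats      # -> /tmp/spdk_mem_dump.txt
    # Step 2: summarize heaps/mempools/memzones, then detail heap 0.
    "$scripts/dpdk_mem_info.py"
    "$scripts/dpdk_mem_info.py" -m 0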
00:05:46.327 [2024-07-26 23:13:38.077897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57181 ] 00:05:46.586 [2024-07-26 23:13:38.245215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.845 [2024-07-26 23:13:38.530870] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.845 [2024-07-26 23:13:38.531104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.751 23:13:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.751 23:13:40 -- common/autotest_common.sh@852 -- # return 0 00:05:48.751 23:13:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:48.751 23:13:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:48.751 23:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.751 23:13:40 -- common/autotest_common.sh@10 -- # set +x 00:05:48.751 { 00:05:48.751 "filename": "/tmp/spdk_mem_dump.txt" 00:05:48.751 } 00:05:48.751 23:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.751 23:13:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:48.751 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:48.751 1 heaps totaling size 820.000000 MiB 00:05:48.751 size: 820.000000 MiB heap id: 0 00:05:48.751 end heaps---------- 00:05:48.751 8 mempools totaling size 598.116089 MiB 00:05:48.751 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:48.751 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:48.751 size: 84.521057 MiB name: bdev_io_57181 00:05:48.751 size: 51.011292 MiB name: evtpool_57181 00:05:48.751 size: 50.003479 MiB name: msgpool_57181 00:05:48.751 size: 21.763794 MiB name: PDU_Pool 00:05:48.751 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:48.751 size: 0.026123 MiB name: Session_Pool 00:05:48.751 end mempools------- 00:05:48.751 6 memzones totaling size 4.142822 MiB 00:05:48.751 size: 1.000366 MiB name: RG_ring_0_57181 00:05:48.751 size: 1.000366 MiB name: RG_ring_1_57181 00:05:48.751 size: 1.000366 MiB name: RG_ring_4_57181 00:05:48.751 size: 1.000366 MiB name: RG_ring_5_57181 00:05:48.751 size: 0.125366 MiB name: RG_ring_2_57181 00:05:48.751 size: 0.015991 MiB name: RG_ring_3_57181 00:05:48.751 end memzones------- 00:05:48.751 23:13:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:48.751 heap id: 0 total size: 820.000000 MiB number of busy elements: 302 number of free elements: 18 00:05:48.751 list of free elements. 
size: 18.451050 MiB 00:05:48.751 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:48.751 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:48.751 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:48.751 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:48.751 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:48.751 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:48.751 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:48.751 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:48.751 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:48.751 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:48.751 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:48.751 element at address: 0x200000200000 with size: 0.829224 MiB 00:05:48.751 element at address: 0x20001b000000 with size: 0.564636 MiB 00:05:48.751 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:48.752 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:48.752 element at address: 0x200013800000 with size: 0.467651 MiB 00:05:48.752 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:48.752 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:48.752 list of standard malloc elements. size: 199.284546 MiB 00:05:48.752 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:48.752 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:48.752 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:48.752 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:48.752 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:48.752 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:48.752 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:48.752 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:48.752 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:48.752 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:48.752 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:48.752 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:05:48.752 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:48.752 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200013877b80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:05:48.752 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:48.752 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:48.752 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b092ac0 with size: 0.000244 MiB 
00:05:48.753 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:48.753 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:48.753 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:48.753 element at 
address: 0x20002846b480 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846e580 
with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:48.753 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:48.754 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:48.754 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:48.754 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:48.754 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:48.754 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:48.754 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:48.754 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:48.754 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:48.754 list of memzone associated elements. 
size: 602.264404 MiB 00:05:48.754 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:48.754 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:48.754 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:48.754 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:48.754 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:48.754 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_57181_0 00:05:48.754 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:48.754 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57181_0 00:05:48.754 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:48.754 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57181_0 00:05:48.754 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:48.754 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:48.754 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:48.754 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:48.754 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:48.754 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57181 00:05:48.754 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:48.754 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57181 00:05:48.754 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:48.754 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57181 00:05:48.754 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:48.754 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:48.754 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:48.754 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:48.754 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:48.754 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:48.754 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:48.754 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:48.754 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:48.754 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57181 00:05:48.754 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:48.754 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57181 00:05:48.754 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:48.754 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57181 00:05:48.754 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:48.754 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57181 00:05:48.754 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:48.754 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57181 00:05:48.754 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:48.754 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:48.754 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:48.754 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:48.754 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:48.754 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:48.754 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:48.754 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_57181 00:05:48.754 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:48.754 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:48.754 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:48.754 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:48.754 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:48.754 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57181 00:05:48.754 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:48.754 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:48.754 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:48.754 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57181 00:05:48.754 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:48.754 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57181 00:05:48.754 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:48.754 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:48.754 23:13:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:48.754 23:13:40 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57181 00:05:48.754 23:13:40 -- common/autotest_common.sh@926 -- # '[' -z 57181 ']' 00:05:48.754 23:13:40 -- common/autotest_common.sh@930 -- # kill -0 57181 00:05:48.754 23:13:40 -- common/autotest_common.sh@931 -- # uname 00:05:48.754 23:13:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:48.754 23:13:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57181 00:05:48.754 killing process with pid 57181 00:05:48.754 23:13:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:48.754 23:13:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:48.754 23:13:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57181' 00:05:48.754 23:13:40 -- common/autotest_common.sh@945 -- # kill 57181 00:05:48.754 23:13:40 -- common/autotest_common.sh@950 -- # wait 57181 00:05:51.285 ************************************ 00:05:51.285 END TEST dpdk_mem_utility 00:05:51.285 ************************************ 00:05:51.285 00:05:51.285 real 0m5.101s 00:05:51.285 user 0m5.055s 00:05:51.285 sys 0m0.818s 00:05:51.285 23:13:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.285 23:13:42 -- common/autotest_common.sh@10 -- # set +x 00:05:51.285 23:13:43 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:51.285 23:13:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.285 23:13:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.285 23:13:43 -- common/autotest_common.sh@10 -- # set +x 00:05:51.285 ************************************ 00:05:51.286 START TEST event 00:05:51.286 ************************************ 00:05:51.286 23:13:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:51.544 * Looking for test storage... 
00:05:51.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:51.544 23:13:43 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:51.545 23:13:43 -- bdev/nbd_common.sh@6 -- # set -e 00:05:51.545 23:13:43 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.545 23:13:43 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:51.545 23:13:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.545 23:13:43 -- common/autotest_common.sh@10 -- # set +x 00:05:51.545 ************************************ 00:05:51.545 START TEST event_perf 00:05:51.545 ************************************ 00:05:51.545 23:13:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.545 Running I/O for 1 seconds...[2024-07-26 23:13:43.199767] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:51.545 [2024-07-26 23:13:43.199885] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57295 ] 00:05:51.813 [2024-07-26 23:13:43.373547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.090 [2024-07-26 23:13:43.643481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.090 [2024-07-26 23:13:43.643650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.090 [2024-07-26 23:13:43.644674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.090 [2024-07-26 23:13:43.644697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.467 Running I/O for 1 seconds... 00:05:53.467 lcore 0: 79176 00:05:53.467 lcore 1: 79180 00:05:53.467 lcore 2: 79168 00:05:53.467 lcore 3: 79172 00:05:53.467 done. 00:05:53.467 00:05:53.467 real 0m1.973s 00:05:53.467 user 0m4.665s 00:05:53.467 sys 0m0.166s 00:05:53.467 23:13:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.467 23:13:45 -- common/autotest_common.sh@10 -- # set +x 00:05:53.467 ************************************ 00:05:53.467 END TEST event_perf 00:05:53.467 ************************************ 00:05:53.467 23:13:45 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:53.467 23:13:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:53.467 23:13:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.467 23:13:45 -- common/autotest_common.sh@10 -- # set +x 00:05:53.467 ************************************ 00:05:53.467 START TEST event_reactor 00:05:53.467 ************************************ 00:05:53.467 23:13:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:53.726 [2024-07-26 23:13:45.251535] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:53.726 [2024-07-26 23:13:45.252164] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57340 ] 00:05:53.726 [2024-07-26 23:13:45.427030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.985 [2024-07-26 23:13:45.707102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.892 test_start 00:05:55.892 oneshot 00:05:55.892 tick 100 00:05:55.892 tick 100 00:05:55.892 tick 250 00:05:55.892 tick 100 00:05:55.892 tick 100 00:05:55.892 tick 100 00:05:55.892 tick 250 00:05:55.892 tick 500 00:05:55.892 tick 100 00:05:55.892 tick 100 00:05:55.892 tick 250 00:05:55.892 tick 100 00:05:55.892 tick 100 00:05:55.892 test_end 00:05:55.892 00:05:55.892 real 0m1.980s 00:05:55.892 user 0m1.721s 00:05:55.892 sys 0m0.147s 00:05:55.892 23:13:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.892 ************************************ 00:05:55.892 END TEST event_reactor 00:05:55.892 ************************************ 00:05:55.892 23:13:47 -- common/autotest_common.sh@10 -- # set +x 00:05:55.892 23:13:47 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.892 23:13:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:55.892 23:13:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.892 23:13:47 -- common/autotest_common.sh@10 -- # set +x 00:05:55.892 ************************************ 00:05:55.892 START TEST event_reactor_perf 00:05:55.892 ************************************ 00:05:55.892 23:13:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:55.892 [2024-07-26 23:13:47.302500] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:55.892 [2024-07-26 23:13:47.302631] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57382 ] 00:05:55.892 [2024-07-26 23:13:47.474937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.152 [2024-07-26 23:13:47.758932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.531 test_start 00:05:57.531 test_end 00:05:57.531 Performance: 379626 events per second 00:05:57.531 00:05:57.531 real 0m1.961s 00:05:57.531 user 0m1.720s 00:05:57.531 sys 0m0.130s 00:05:57.531 23:13:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.531 23:13:49 -- common/autotest_common.sh@10 -- # set +x 00:05:57.531 ************************************ 00:05:57.531 END TEST event_reactor_perf 00:05:57.531 ************************************ 00:05:57.531 23:13:49 -- event/event.sh@49 -- # uname -s 00:05:57.531 23:13:49 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:57.531 23:13:49 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:57.531 23:13:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.531 23:13:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.531 23:13:49 -- common/autotest_common.sh@10 -- # set +x 00:05:57.790 ************************************ 00:05:57.790 START TEST event_scheduler 00:05:57.790 ************************************ 00:05:57.790 23:13:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:57.790 * Looking for test storage... 00:05:57.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:57.790 23:13:49 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:57.790 23:13:49 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57449 00:05:57.790 23:13:49 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:57.790 23:13:49 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.790 23:13:49 -- scheduler/scheduler.sh@37 -- # waitforlisten 57449 00:05:57.790 23:13:49 -- common/autotest_common.sh@819 -- # '[' -z 57449 ']' 00:05:57.790 23:13:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.790 23:13:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:57.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.790 23:13:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.790 23:13:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:57.790 23:13:49 -- common/autotest_common.sh@10 -- # set +x 00:05:57.790 [2024-07-26 23:13:49.525706] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:57.790 [2024-07-26 23:13:49.525843] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57449 ] 00:05:58.049 [2024-07-26 23:13:49.691182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.307 [2024-07-26 23:13:49.968740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.307 [2024-07-26 23:13:49.968935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.307 [2024-07-26 23:13:49.969557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.307 [2024-07-26 23:13:49.969589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.565 23:13:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:58.565 23:13:50 -- common/autotest_common.sh@852 -- # return 0 00:05:58.565 23:13:50 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:58.565 23:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:58.566 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:05:58.566 POWER: Env isn't set yet! 00:05:58.566 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:58.566 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.566 POWER: Cannot set governor of lcore 0 to userspace 00:05:58.566 POWER: Attempting to initialise PSTAT power management... 00:05:58.566 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.566 POWER: Cannot set governor of lcore 0 to performance 00:05:58.566 POWER: Attempting to initialise AMD PSTATE power management... 00:05:58.566 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.566 POWER: Cannot set governor of lcore 0 to userspace 00:05:58.566 POWER: Attempting to initialise CPPC power management... 00:05:58.566 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.566 POWER: Cannot set governor of lcore 0 to userspace 00:05:58.566 POWER: Attempting to initialise VM power management... 
00:05:58.566 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:58.566 POWER: Unable to set Power Management Environment for lcore 0 00:05:58.566 [2024-07-26 23:13:50.318566] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:58.566 [2024-07-26 23:13:50.318592] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:58.566 [2024-07-26 23:13:50.318608] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:58.566 [2024-07-26 23:13:50.318629] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:58.566 [2024-07-26 23:13:50.318645] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:58.566 [2024-07-26 23:13:50.318656] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:58.823 23:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:58.823 23:13:50 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:58.823 23:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:58.823 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:05:59.082 [2024-07-26 23:13:50.747217] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:59.082 23:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.082 23:13:50 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:59.082 23:13:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:59.082 23:13:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.082 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:05:59.082 ************************************ 00:05:59.082 START TEST scheduler_create_thread 00:05:59.082 ************************************ 00:05:59.082 23:13:50 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:59.082 23:13:50 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:59.082 23:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.082 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:05:59.082 2 00:05:59.082 23:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.082 23:13:50 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:59.082 23:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.082 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:05:59.082 3 00:05:59.082 23:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.082 23:13:50 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:59.082 23:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.082 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:05:59.082 4 00:05:59.082 23:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.082 23:13:50 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:59.082 23:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.082 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:05:59.082 5 00:05:59.082 23:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.082 23:13:50 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:59.082 23:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.082 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:05:59.082 6 00:05:59.082 23:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.082 23:13:50 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:59.082 23:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.082 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:05:59.340 7 00:05:59.340 23:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.340 23:13:50 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:59.340 23:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.340 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:05:59.340 8 00:05:59.340 23:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.340 23:13:50 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:59.340 23:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.340 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:05:59.340 9 00:05:59.340 23:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.340 23:13:50 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:59.340 23:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.340 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:05:59.340 10 00:05:59.340 23:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.341 23:13:50 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:59.341 23:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.341 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:06:00.715 23:13:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:00.715 23:13:52 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:00.715 23:13:52 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:00.715 23:13:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:00.715 23:13:52 -- common/autotest_common.sh@10 -- # set +x 00:06:01.281 23:13:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.539 23:13:53 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:01.539 23:13:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.539 23:13:53 -- common/autotest_common.sh@10 -- # set +x 00:06:02.472 23:13:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.472 23:13:53 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:02.472 23:13:53 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:02.472 23:13:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.472 23:13:53 -- common/autotest_common.sh@10 -- # set +x 00:06:03.039 ************************************ 00:06:03.039 END TEST scheduler_create_thread 00:06:03.039 ************************************ 00:06:03.039 23:13:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.039 00:06:03.039 real 0m3.885s 00:06:03.039 user 0m0.022s 00:06:03.039 sys 0m0.009s 00:06:03.039 23:13:54 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.039 23:13:54 -- common/autotest_common.sh@10 -- # set +x 00:06:03.039 23:13:54 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:03.039 23:13:54 -- scheduler/scheduler.sh@46 -- # killprocess 57449 00:06:03.039 23:13:54 -- common/autotest_common.sh@926 -- # '[' -z 57449 ']' 00:06:03.039 23:13:54 -- common/autotest_common.sh@930 -- # kill -0 57449 00:06:03.039 23:13:54 -- common/autotest_common.sh@931 -- # uname 00:06:03.039 23:13:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:03.039 23:13:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57449 00:06:03.039 killing process with pid 57449 00:06:03.039 23:13:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:03.039 23:13:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:03.039 23:13:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57449' 00:06:03.039 23:13:54 -- common/autotest_common.sh@945 -- # kill 57449 00:06:03.039 23:13:54 -- common/autotest_common.sh@950 -- # wait 57449 00:06:03.297 [2024-07-26 23:13:55.031198] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:04.715 00:06:04.715 real 0m7.118s 00:06:04.715 user 0m13.764s 00:06:04.715 sys 0m0.601s 00:06:04.715 ************************************ 00:06:04.715 END TEST event_scheduler 00:06:04.715 ************************************ 00:06:04.715 23:13:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.715 23:13:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.975 23:13:56 -- event/event.sh@51 -- # modprobe -n nbd 00:06:04.975 23:13:56 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:04.975 23:13:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.975 23:13:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.975 23:13:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.975 ************************************ 00:06:04.975 START TEST app_repeat 00:06:04.975 ************************************ 00:06:04.975 23:13:56 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:04.975 23:13:56 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.975 23:13:56 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.975 23:13:56 -- event/event.sh@13 -- # local nbd_list 00:06:04.975 23:13:56 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.975 23:13:56 -- event/event.sh@14 -- # local bdev_list 00:06:04.975 23:13:56 -- event/event.sh@15 -- # local repeat_times=4 00:06:04.975 23:13:56 -- event/event.sh@17 -- # modprobe nbd 00:06:04.975 23:13:56 -- event/event.sh@19 -- # repeat_pid=57577 00:06:04.975 23:13:56 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:04.975 23:13:56 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.975 Process app_repeat pid: 57577 00:06:04.975 23:13:56 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57577' 00:06:04.975 23:13:56 -- event/event.sh@23 -- # for i in {0..2} 00:06:04.975 spdk_app_start Round 0 00:06:04.975 23:13:56 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:04.975 23:13:56 -- event/event.sh@25 -- # waitforlisten 57577 /var/tmp/spdk-nbd.sock 00:06:04.975 23:13:56 -- common/autotest_common.sh@819 -- # '[' -z 57577 ']' 00:06:04.975 23:13:56 -- common/autotest_common.sh@823 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.975 23:13:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.975 23:13:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.975 23:13:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.975 23:13:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.975 [2024-07-26 23:13:56.561175] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:04.975 [2024-07-26 23:13:56.561283] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57577 ] 00:06:05.235 [2024-07-26 23:13:56.736252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.235 [2024-07-26 23:13:56.987508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.235 [2024-07-26 23:13:56.987541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.613 23:13:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:06.613 23:13:58 -- common/autotest_common.sh@852 -- # return 0 00:06:06.613 23:13:58 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.613 Malloc0 00:06:06.613 23:13:58 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.872 Malloc1 00:06:06.872 23:13:58 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@12 -- # local i 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.872 23:13:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.131 /dev/nbd0 00:06:07.131 23:13:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.131 23:13:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.131 23:13:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:07.131 23:13:58 -- common/autotest_common.sh@857 -- # local i 00:06:07.131 23:13:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:07.131 23:13:58 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:07.131 23:13:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:07.131 23:13:58 -- common/autotest_common.sh@861 -- # break 00:06:07.131 23:13:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:07.131 23:13:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:07.131 23:13:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.131 1+0 records in 00:06:07.131 1+0 records out 00:06:07.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128887 s, 3.2 MB/s 00:06:07.131 23:13:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.131 23:13:58 -- common/autotest_common.sh@874 -- # size=4096 00:06:07.131 23:13:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.131 23:13:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:07.131 23:13:58 -- common/autotest_common.sh@877 -- # return 0 00:06:07.131 23:13:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.131 23:13:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.131 23:13:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.391 /dev/nbd1 00:06:07.391 23:13:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.391 23:13:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.391 23:13:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:07.391 23:13:58 -- common/autotest_common.sh@857 -- # local i 00:06:07.391 23:13:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:07.391 23:13:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:07.391 23:13:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:07.391 23:13:58 -- common/autotest_common.sh@861 -- # break 00:06:07.391 23:13:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:07.391 23:13:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:07.391 23:13:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.391 1+0 records in 00:06:07.391 1+0 records out 00:06:07.391 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381137 s, 10.7 MB/s 00:06:07.391 23:13:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.391 23:13:59 -- common/autotest_common.sh@874 -- # size=4096 00:06:07.391 23:13:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.391 23:13:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:07.391 23:13:59 -- common/autotest_common.sh@877 -- # return 0 00:06:07.391 23:13:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.391 23:13:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.391 23:13:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.391 23:13:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.391 23:13:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.651 { 00:06:07.651 "nbd_device": "/dev/nbd0", 00:06:07.651 "bdev_name": "Malloc0" 00:06:07.651 }, 00:06:07.651 { 00:06:07.651 "nbd_device": "/dev/nbd1", 
00:06:07.651 "bdev_name": "Malloc1" 00:06:07.651 } 00:06:07.651 ]' 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.651 { 00:06:07.651 "nbd_device": "/dev/nbd0", 00:06:07.651 "bdev_name": "Malloc0" 00:06:07.651 }, 00:06:07.651 { 00:06:07.651 "nbd_device": "/dev/nbd1", 00:06:07.651 "bdev_name": "Malloc1" 00:06:07.651 } 00:06:07.651 ]' 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.651 /dev/nbd1' 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.651 /dev/nbd1' 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.651 256+0 records in 00:06:07.651 256+0 records out 00:06:07.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130944 s, 80.1 MB/s 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.651 256+0 records in 00:06:07.651 256+0 records out 00:06:07.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275842 s, 38.0 MB/s 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.651 256+0 records in 00:06:07.651 256+0 records out 00:06:07.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0318913 s, 32.9 MB/s 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@51 -- # local i 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.651 23:13:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.911 23:13:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.911 23:13:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.911 23:13:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.911 23:13:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.911 23:13:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.911 23:13:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.911 23:13:59 -- bdev/nbd_common.sh@41 -- # break 00:06:07.911 23:13:59 -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.911 23:13:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.911 23:13:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@41 -- # break 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.169 23:13:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.429 23:13:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.429 23:13:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.429 23:13:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.429 23:13:59 -- bdev/nbd_common.sh@65 -- # true 00:06:08.429 23:13:59 -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.429 23:13:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.429 23:13:59 -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.429 23:13:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.429 23:13:59 -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.429 23:13:59 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.688 23:14:00 -- event/event.sh@35 -- # sleep 3 00:06:10.062 [2024-07-26 23:14:01.743100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.320 [2024-07-26 23:14:01.983654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.320 [2024-07-26 
23:14:01.983659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.578 [2024-07-26 23:14:02.244655] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.578 [2024-07-26 23:14:02.244738] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.953 23:14:03 -- event/event.sh@23 -- # for i in {0..2} 00:06:11.954 spdk_app_start Round 1 00:06:11.954 23:14:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:11.954 23:14:03 -- event/event.sh@25 -- # waitforlisten 57577 /var/tmp/spdk-nbd.sock 00:06:11.954 23:14:03 -- common/autotest_common.sh@819 -- # '[' -z 57577 ']' 00:06:11.954 23:14:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.954 23:14:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.954 23:14:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.954 23:14:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.954 23:14:03 -- common/autotest_common.sh@10 -- # set +x 00:06:11.954 23:14:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:11.954 23:14:03 -- common/autotest_common.sh@852 -- # return 0 00:06:11.954 23:14:03 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.212 Malloc0 00:06:12.212 23:14:03 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.469 Malloc1 00:06:12.469 23:14:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.469 23:14:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.469 23:14:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.469 23:14:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.469 23:14:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.469 23:14:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.469 23:14:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.469 23:14:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.469 23:14:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.469 23:14:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.469 23:14:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.469 23:14:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.470 23:14:04 -- bdev/nbd_common.sh@12 -- # local i 00:06:12.470 23:14:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.470 23:14:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.470 23:14:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.728 /dev/nbd0 00:06:12.728 23:14:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.728 23:14:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.728 23:14:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:12.728 23:14:04 -- common/autotest_common.sh@857 -- # local i 00:06:12.728 23:14:04 -- common/autotest_common.sh@859 -- # (( i = 
1 )) 00:06:12.728 23:14:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:12.728 23:14:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:12.728 23:14:04 -- common/autotest_common.sh@861 -- # break 00:06:12.728 23:14:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:12.728 23:14:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:12.728 23:14:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.728 1+0 records in 00:06:12.728 1+0 records out 00:06:12.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305195 s, 13.4 MB/s 00:06:12.728 23:14:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.728 23:14:04 -- common/autotest_common.sh@874 -- # size=4096 00:06:12.728 23:14:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.728 23:14:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:12.728 23:14:04 -- common/autotest_common.sh@877 -- # return 0 00:06:12.728 23:14:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.728 23:14:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.728 23:14:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.728 /dev/nbd1 00:06:12.986 23:14:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.986 23:14:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.986 23:14:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:12.986 23:14:04 -- common/autotest_common.sh@857 -- # local i 00:06:12.986 23:14:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:12.986 23:14:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:12.986 23:14:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:12.986 23:14:04 -- common/autotest_common.sh@861 -- # break 00:06:12.986 23:14:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:12.986 23:14:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:12.987 23:14:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.987 1+0 records in 00:06:12.987 1+0 records out 00:06:12.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250075 s, 16.4 MB/s 00:06:12.987 23:14:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.987 23:14:04 -- common/autotest_common.sh@874 -- # size=4096 00:06:12.987 23:14:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.987 23:14:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:12.987 23:14:04 -- common/autotest_common.sh@877 -- # return 0 00:06:12.987 23:14:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.987 23:14:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.987 23:14:04 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.987 23:14:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.987 23:14:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.987 23:14:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.987 { 00:06:12.987 "nbd_device": "/dev/nbd0", 00:06:12.987 "bdev_name": "Malloc0" 00:06:12.987 }, 00:06:12.987 { 00:06:12.987 
"nbd_device": "/dev/nbd1", 00:06:12.987 "bdev_name": "Malloc1" 00:06:12.987 } 00:06:12.987 ]' 00:06:12.987 23:14:04 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.987 { 00:06:12.987 "nbd_device": "/dev/nbd0", 00:06:12.987 "bdev_name": "Malloc0" 00:06:12.987 }, 00:06:12.987 { 00:06:12.987 "nbd_device": "/dev/nbd1", 00:06:12.987 "bdev_name": "Malloc1" 00:06:12.987 } 00:06:12.987 ]' 00:06:12.987 23:14:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.245 /dev/nbd1' 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.245 /dev/nbd1' 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.245 256+0 records in 00:06:13.245 256+0 records out 00:06:13.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126669 s, 82.8 MB/s 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.245 256+0 records in 00:06:13.245 256+0 records out 00:06:13.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228561 s, 45.9 MB/s 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.245 256+0 records in 00:06:13.245 256+0 records out 00:06:13.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0335608 s, 31.2 MB/s 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.245 23:14:04 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@51 -- # local i 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.245 23:14:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.503 23:14:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.503 23:14:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.503 23:14:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.503 23:14:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.503 23:14:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.503 23:14:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.503 23:14:05 -- bdev/nbd_common.sh@41 -- # break 00:06:13.503 23:14:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.503 23:14:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.503 23:14:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@41 -- # break 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@65 -- # true 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.761 23:14:05 -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.761 23:14:05 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.328 23:14:05 -- event/event.sh@35 -- # sleep 3 00:06:15.703 [2024-07-26 23:14:07.286942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.962 [2024-07-26 23:14:07.523406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
00:06:15.962 [2024-07-26 23:14:07.523426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.220 [2024-07-26 23:14:07.783737] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.220 [2024-07-26 23:14:07.783821] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.153 23:14:08 -- event/event.sh@23 -- # for i in {0..2} 00:06:17.153 spdk_app_start Round 2 00:06:17.153 23:14:08 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:17.153 23:14:08 -- event/event.sh@25 -- # waitforlisten 57577 /var/tmp/spdk-nbd.sock 00:06:17.153 23:14:08 -- common/autotest_common.sh@819 -- # '[' -z 57577 ']' 00:06:17.153 23:14:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.153 23:14:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.153 23:14:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.153 23:14:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.153 23:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:17.412 23:14:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.412 23:14:09 -- common/autotest_common.sh@852 -- # return 0 00:06:17.412 23:14:09 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.669 Malloc0 00:06:17.670 23:14:09 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.929 Malloc1 00:06:17.929 23:14:09 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@12 -- # local i 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.929 23:14:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.188 /dev/nbd0 00:06:18.188 23:14:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.188 23:14:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.188 23:14:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:18.188 23:14:09 -- common/autotest_common.sh@857 -- # local i 00:06:18.188 23:14:09 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:18.188 23:14:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:18.188 23:14:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:18.188 23:14:09 -- common/autotest_common.sh@861 -- # break 00:06:18.188 23:14:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:18.188 23:14:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:18.188 23:14:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.188 1+0 records in 00:06:18.188 1+0 records out 00:06:18.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471022 s, 8.7 MB/s 00:06:18.188 23:14:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.188 23:14:09 -- common/autotest_common.sh@874 -- # size=4096 00:06:18.188 23:14:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.188 23:14:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:18.188 23:14:09 -- common/autotest_common.sh@877 -- # return 0 00:06:18.188 23:14:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.188 23:14:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.188 23:14:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:18.447 /dev/nbd1 00:06:18.447 23:14:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.447 23:14:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.447 23:14:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:18.447 23:14:09 -- common/autotest_common.sh@857 -- # local i 00:06:18.447 23:14:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:18.447 23:14:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:18.447 23:14:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:18.447 23:14:09 -- common/autotest_common.sh@861 -- # break 00:06:18.447 23:14:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:18.447 23:14:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:18.447 23:14:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.447 1+0 records in 00:06:18.447 1+0 records out 00:06:18.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219609 s, 18.7 MB/s 00:06:18.447 23:14:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.447 23:14:09 -- common/autotest_common.sh@874 -- # size=4096 00:06:18.447 23:14:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.447 23:14:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:18.447 23:14:10 -- common/autotest_common.sh@877 -- # return 0 00:06:18.447 23:14:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.447 23:14:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.447 23:14:10 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.447 23:14:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.447 23:14:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.447 23:14:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.447 { 00:06:18.447 "nbd_device": "/dev/nbd0", 00:06:18.447 "bdev_name": "Malloc0" 
00:06:18.447 }, 00:06:18.447 { 00:06:18.448 "nbd_device": "/dev/nbd1", 00:06:18.448 "bdev_name": "Malloc1" 00:06:18.448 } 00:06:18.448 ]' 00:06:18.448 23:14:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.448 { 00:06:18.448 "nbd_device": "/dev/nbd0", 00:06:18.448 "bdev_name": "Malloc0" 00:06:18.448 }, 00:06:18.448 { 00:06:18.448 "nbd_device": "/dev/nbd1", 00:06:18.448 "bdev_name": "Malloc1" 00:06:18.448 } 00:06:18.448 ]' 00:06:18.448 23:14:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.706 /dev/nbd1' 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.706 /dev/nbd1' 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.706 256+0 records in 00:06:18.706 256+0 records out 00:06:18.706 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00576552 s, 182 MB/s 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.706 256+0 records in 00:06:18.706 256+0 records out 00:06:18.706 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306374 s, 34.2 MB/s 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.706 256+0 records in 00:06:18.706 256+0 records out 00:06:18.706 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309842 s, 33.8 MB/s 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.706 23:14:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@51 -- # local i 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.707 23:14:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@41 -- # break 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@41 -- # break 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.966 23:14:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.226 23:14:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.226 23:14:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.226 23:14:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.226 23:14:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.226 23:14:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.226 23:14:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.226 23:14:10 -- bdev/nbd_common.sh@65 -- # true 00:06:19.226 23:14:10 -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.226 23:14:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.226 23:14:10 -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.226 23:14:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.226 23:14:10 -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.226 23:14:10 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.794 23:14:11 -- event/event.sh@35 -- # sleep 3 00:06:21.174 [2024-07-26 23:14:12.746267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.433 [2024-07-26 23:14:12.984054] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 0 00:06:21.433 [2024-07-26 23:14:12.984059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.693 [2024-07-26 23:14:13.245023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.693 [2024-07-26 23:14:13.245118] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.629 23:14:14 -- event/event.sh@38 -- # waitforlisten 57577 /var/tmp/spdk-nbd.sock 00:06:22.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.629 23:14:14 -- common/autotest_common.sh@819 -- # '[' -z 57577 ']' 00:06:22.629 23:14:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.629 23:14:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:22.629 23:14:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.629 23:14:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:22.629 23:14:14 -- common/autotest_common.sh@10 -- # set +x 00:06:22.888 23:14:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.888 23:14:14 -- common/autotest_common.sh@852 -- # return 0 00:06:22.888 23:14:14 -- event/event.sh@39 -- # killprocess 57577 00:06:22.888 23:14:14 -- common/autotest_common.sh@926 -- # '[' -z 57577 ']' 00:06:22.888 23:14:14 -- common/autotest_common.sh@930 -- # kill -0 57577 00:06:22.888 23:14:14 -- common/autotest_common.sh@931 -- # uname 00:06:22.888 23:14:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:22.888 23:14:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57577 00:06:22.888 killing process with pid 57577 00:06:22.888 23:14:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:22.888 23:14:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:22.888 23:14:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57577' 00:06:22.888 23:14:14 -- common/autotest_common.sh@945 -- # kill 57577 00:06:22.888 23:14:14 -- common/autotest_common.sh@950 -- # wait 57577 00:06:24.266 spdk_app_start is called in Round 0. 00:06:24.266 Shutdown signal received, stop current app iteration 00:06:24.266 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:24.266 spdk_app_start is called in Round 1. 00:06:24.266 Shutdown signal received, stop current app iteration 00:06:24.266 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:24.266 spdk_app_start is called in Round 2. 00:06:24.266 Shutdown signal received, stop current app iteration 00:06:24.266 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:24.266 spdk_app_start is called in Round 3. 
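Before the app_repeat rounds above, the nbd_dd_data_verify helper performed the NBD data-integrity roundtrip: write 1 MiB of random data through each exported device with O_DIRECT, then byte-compare the devices against the reference file. A minimal standalone sketch of that pattern, reusing the device names, block size, and temp path from the run above:

  # write-and-verify roundtrip over NBD, as exercised by nbd_dd_data_verify
  tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of reference data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write phase, bypassing the page cache
  done
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp_file" "$dev"                             # verify phase: fail on the first differing byte
  done
  rm "$tmp_file"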
00:06:24.267 Shutdown signal received, stop current app iteration 00:06:24.267 23:14:15 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:24.267 23:14:15 -- event/event.sh@42 -- # return 0 00:06:24.267 00:06:24.267 real 0m19.374s 00:06:24.267 user 0m38.764s 00:06:24.267 sys 0m3.267s 00:06:24.267 23:14:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.267 ************************************ 00:06:24.267 END TEST app_repeat 00:06:24.267 ************************************ 00:06:24.267 23:14:15 -- common/autotest_common.sh@10 -- # set +x 00:06:24.267 23:14:15 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:24.267 23:14:15 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:24.267 23:14:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.267 23:14:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.267 23:14:15 -- common/autotest_common.sh@10 -- # set +x 00:06:24.267 ************************************ 00:06:24.267 START TEST cpu_locks 00:06:24.267 ************************************ 00:06:24.267 23:14:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:24.526 * Looking for test storage... 00:06:24.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:24.526 23:14:16 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:24.526 23:14:16 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:24.526 23:14:16 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:24.526 23:14:16 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:24.526 23:14:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.526 23:14:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.526 23:14:16 -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 ************************************ 00:06:24.526 START TEST default_locks 00:06:24.526 ************************************ 00:06:24.526 23:14:16 -- common/autotest_common.sh@1104 -- # default_locks 00:06:24.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.526 23:14:16 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58012 00:06:24.526 23:14:16 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.526 23:14:16 -- event/cpu_locks.sh@47 -- # waitforlisten 58012 00:06:24.526 23:14:16 -- common/autotest_common.sh@819 -- # '[' -z 58012 ']' 00:06:24.526 23:14:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.526 23:14:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.526 23:14:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.526 23:14:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.526 23:14:16 -- common/autotest_common.sh@10 -- # set +x 00:06:24.526 [2024-07-26 23:14:16.201397] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:24.526 [2024-07-26 23:14:16.201520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58012 ] 00:06:24.785 [2024-07-26 23:14:16.376008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.044 [2024-07-26 23:14:16.641790] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.044 [2024-07-26 23:14:16.642034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.948 23:14:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.948 23:14:18 -- common/autotest_common.sh@852 -- # return 0 00:06:26.948 23:14:18 -- event/cpu_locks.sh@49 -- # locks_exist 58012 00:06:26.948 23:14:18 -- event/cpu_locks.sh@22 -- # lslocks -p 58012 00:06:26.948 23:14:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.948 23:14:18 -- event/cpu_locks.sh@50 -- # killprocess 58012 00:06:26.948 23:14:18 -- common/autotest_common.sh@926 -- # '[' -z 58012 ']' 00:06:26.948 23:14:18 -- common/autotest_common.sh@930 -- # kill -0 58012 00:06:26.948 23:14:18 -- common/autotest_common.sh@931 -- # uname 00:06:26.948 23:14:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.948 23:14:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58012 00:06:27.208 killing process with pid 58012 00:06:27.208 23:14:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.208 23:14:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.208 23:14:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58012' 00:06:27.208 23:14:18 -- common/autotest_common.sh@945 -- # kill 58012 00:06:27.208 23:14:18 -- common/autotest_common.sh@950 -- # wait 58012 00:06:29.745 23:14:21 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58012 00:06:29.745 23:14:21 -- common/autotest_common.sh@640 -- # local es=0 00:06:29.745 23:14:21 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58012 00:06:29.745 23:14:21 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:29.745 23:14:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.745 23:14:21 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:29.745 23:14:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:29.745 23:14:21 -- common/autotest_common.sh@643 -- # waitforlisten 58012 00:06:29.745 23:14:21 -- common/autotest_common.sh@819 -- # '[' -z 58012 ']' 00:06:29.745 23:14:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.745 ERROR: process (pid: 58012) is no longer running 00:06:29.745 23:14:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.745 23:14:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
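The locks_exist check traced above is the core assertion of default_locks: a target started with -m 0x1 must hold an advisory lock on its per-core lock file (the /var/tmp/spdk_cpu_lock_* files compared later in this suite). Reduced to its essentials, with the pid taken from this run:

  # assert the target process holds a CPU-core lock file
  pid=58012                                     # example pid from the run above
  lslocks -p "$pid" | grep -q spdk_cpu_lock \
      && echo "core lock held by pid $pid"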
00:06:29.745 23:14:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.745 23:14:21 -- common/autotest_common.sh@10 -- # set +x 00:06:29.745 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58012) - No such process 00:06:29.745 23:14:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.745 23:14:21 -- common/autotest_common.sh@852 -- # return 1 00:06:29.745 23:14:21 -- common/autotest_common.sh@643 -- # es=1 00:06:29.745 23:14:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:29.745 23:14:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:29.745 23:14:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:29.745 23:14:21 -- event/cpu_locks.sh@54 -- # no_locks 00:06:29.745 23:14:21 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.745 23:14:21 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.745 23:14:21 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.745 00:06:29.745 real 0m5.151s 00:06:29.745 user 0m5.120s 00:06:29.745 sys 0m0.888s 00:06:29.745 23:14:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.745 ************************************ 00:06:29.745 END TEST default_locks 00:06:29.745 ************************************ 00:06:29.746 23:14:21 -- common/autotest_common.sh@10 -- # set +x 00:06:29.746 23:14:21 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:29.746 23:14:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:29.746 23:14:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.746 23:14:21 -- common/autotest_common.sh@10 -- # set +x 00:06:29.746 ************************************ 00:06:29.746 START TEST default_locks_via_rpc 00:06:29.746 ************************************ 00:06:29.746 23:14:21 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:29.746 23:14:21 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58104 00:06:29.746 23:14:21 -- event/cpu_locks.sh@63 -- # waitforlisten 58104 00:06:29.746 23:14:21 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.746 23:14:21 -- common/autotest_common.sh@819 -- # '[' -z 58104 ']' 00:06:29.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.746 23:14:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.746 23:14:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.746 23:14:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.746 23:14:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.746 23:14:21 -- common/autotest_common.sh@10 -- # set +x 00:06:29.746 [2024-07-26 23:14:21.429647] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:29.746 [2024-07-26 23:14:21.429782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58104 ] 00:06:30.005 [2024-07-26 23:14:21.601340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.265 [2024-07-26 23:14:21.866924] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.265 [2024-07-26 23:14:21.867185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.169 23:14:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:32.169 23:14:23 -- common/autotest_common.sh@852 -- # return 0 00:06:32.169 23:14:23 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:32.169 23:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:32.169 23:14:23 -- common/autotest_common.sh@10 -- # set +x 00:06:32.169 23:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:32.169 23:14:23 -- event/cpu_locks.sh@67 -- # no_locks 00:06:32.169 23:14:23 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:32.169 23:14:23 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:32.169 23:14:23 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:32.169 23:14:23 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.169 23:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:32.169 23:14:23 -- common/autotest_common.sh@10 -- # set +x 00:06:32.169 23:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:32.169 23:14:23 -- event/cpu_locks.sh@71 -- # locks_exist 58104 00:06:32.169 23:14:23 -- event/cpu_locks.sh@22 -- # lslocks -p 58104 00:06:32.169 23:14:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.438 23:14:23 -- event/cpu_locks.sh@73 -- # killprocess 58104 00:06:32.438 23:14:23 -- common/autotest_common.sh@926 -- # '[' -z 58104 ']' 00:06:32.438 23:14:23 -- common/autotest_common.sh@930 -- # kill -0 58104 00:06:32.438 23:14:23 -- common/autotest_common.sh@931 -- # uname 00:06:32.438 23:14:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.438 23:14:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58104 00:06:32.438 killing process with pid 58104 00:06:32.438 23:14:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:32.438 23:14:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.438 23:14:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58104' 00:06:32.438 23:14:24 -- common/autotest_common.sh@945 -- # kill 58104 00:06:32.438 23:14:24 -- common/autotest_common.sh@950 -- # wait 58104 00:06:34.973 00:06:34.973 real 0m5.250s 00:06:34.973 user 0m5.270s 00:06:34.973 sys 0m0.874s 00:06:34.973 23:14:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.973 23:14:26 -- common/autotest_common.sh@10 -- # set +x 00:06:34.973 ************************************ 00:06:34.973 END TEST default_locks_via_rpc 00:06:34.973 ************************************ 00:06:34.973 23:14:26 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:34.973 23:14:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:34.973 23:14:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.973 23:14:26 -- common/autotest_common.sh@10 -- # set +x 00:06:34.973 
************************************ 00:06:34.973 START TEST non_locking_app_on_locked_coremask 00:06:34.973 ************************************ 00:06:34.973 23:14:26 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:34.973 23:14:26 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58193 00:06:34.973 23:14:26 -- event/cpu_locks.sh@81 -- # waitforlisten 58193 /var/tmp/spdk.sock 00:06:34.973 23:14:26 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.973 23:14:26 -- common/autotest_common.sh@819 -- # '[' -z 58193 ']' 00:06:34.973 23:14:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.973 23:14:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.973 23:14:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.973 23:14:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.973 23:14:26 -- common/autotest_common.sh@10 -- # set +x 00:06:35.231 [2024-07-26 23:14:26.754118] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:35.232 [2024-07-26 23:14:26.754973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58193 ] 00:06:35.232 [2024-07-26 23:14:26.929399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.491 [2024-07-26 23:14:27.206661] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.491 [2024-07-26 23:14:27.206883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.396 23:14:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.396 23:14:28 -- common/autotest_common.sh@852 -- # return 0 00:06:37.396 23:14:28 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58224 00:06:37.396 23:14:28 -- event/cpu_locks.sh@85 -- # waitforlisten 58224 /var/tmp/spdk2.sock 00:06:37.396 23:14:28 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:37.396 23:14:28 -- common/autotest_common.sh@819 -- # '[' -z 58224 ']' 00:06:37.396 23:14:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.396 23:14:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.396 23:14:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.396 23:14:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.396 23:14:28 -- common/autotest_common.sh@10 -- # set +x 00:06:37.396 [2024-07-26 23:14:28.971568] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:37.396 [2024-07-26 23:14:28.971873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58224 ] 00:06:37.397 [2024-07-26 23:14:29.138312] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
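The "CPU core locks deactivated." NOTICE above is what spdk_tgt prints when launched with --disable-cpumask-locks, and the default_locks_via_rpc test just before it showed the runtime counterpart: claiming and releasing the same locks over the RPC socket. Assembled into a manual session (a sketch only; it assumes the default /var/tmp/spdk.sock socket and omits the wait-for-listen step the tests perform):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
  # once the RPC socket is up, toggle the core locks at runtime:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks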
00:06:37.397 [2024-07-26 23:14:29.138367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.964 [2024-07-26 23:14:29.677990] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.964 [2024-07-26 23:14:29.678198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.500 23:14:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.500 23:14:31 -- common/autotest_common.sh@852 -- # return 0 00:06:40.500 23:14:31 -- event/cpu_locks.sh@87 -- # locks_exist 58193 00:06:40.500 23:14:31 -- event/cpu_locks.sh@22 -- # lslocks -p 58193 00:06:40.500 23:14:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.069 23:14:32 -- event/cpu_locks.sh@89 -- # killprocess 58193 00:06:41.069 23:14:32 -- common/autotest_common.sh@926 -- # '[' -z 58193 ']' 00:06:41.069 23:14:32 -- common/autotest_common.sh@930 -- # kill -0 58193 00:06:41.069 23:14:32 -- common/autotest_common.sh@931 -- # uname 00:06:41.069 23:14:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.069 23:14:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58193 00:06:41.069 killing process with pid 58193 00:06:41.069 23:14:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.069 23:14:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.069 23:14:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58193' 00:06:41.069 23:14:32 -- common/autotest_common.sh@945 -- # kill 58193 00:06:41.069 23:14:32 -- common/autotest_common.sh@950 -- # wait 58193 00:06:46.351 23:14:37 -- event/cpu_locks.sh@90 -- # killprocess 58224 00:06:46.351 23:14:37 -- common/autotest_common.sh@926 -- # '[' -z 58224 ']' 00:06:46.351 23:14:37 -- common/autotest_common.sh@930 -- # kill -0 58224 00:06:46.351 23:14:37 -- common/autotest_common.sh@931 -- # uname 00:06:46.351 23:14:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:46.351 23:14:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58224 00:06:46.351 killing process with pid 58224 00:06:46.351 23:14:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:46.351 23:14:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:46.351 23:14:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58224' 00:06:46.351 23:14:37 -- common/autotest_common.sh@945 -- # kill 58224 00:06:46.351 23:14:37 -- common/autotest_common.sh@950 -- # wait 58224 00:06:47.743 ************************************ 00:06:47.743 END TEST non_locking_app_on_locked_coremask 00:06:47.743 ************************************ 00:06:47.743 00:06:47.743 real 0m12.846s 00:06:47.743 user 0m13.325s 00:06:47.743 sys 0m1.808s 00:06:47.743 23:14:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.743 23:14:39 -- common/autotest_common.sh@10 -- # set +x 00:06:48.001 23:14:39 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:48.001 23:14:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.001 23:14:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.001 23:14:39 -- common/autotest_common.sh@10 -- # set +x 00:06:48.001 ************************************ 00:06:48.001 START TEST locking_app_on_unlocked_coremask 00:06:48.001 ************************************ 00:06:48.001 23:14:39 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:48.001 23:14:39 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=58385 00:06:48.001 23:14:39 -- event/cpu_locks.sh@99 -- # waitforlisten 58385 /var/tmp/spdk.sock 00:06:48.001 23:14:39 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:48.001 23:14:39 -- common/autotest_common.sh@819 -- # '[' -z 58385 ']' 00:06:48.001 23:14:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.001 23:14:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:48.001 23:14:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.001 23:14:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:48.001 23:14:39 -- common/autotest_common.sh@10 -- # set +x 00:06:48.001 [2024-07-26 23:14:39.682562] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:48.001 [2024-07-26 23:14:39.682923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58385 ] 00:06:48.261 [2024-07-26 23:14:39.857256] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:48.261 [2024-07-26 23:14:39.857451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.520 [2024-07-26 23:14:40.066347] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:48.520 [2024-07-26 23:14:40.066537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.458 23:14:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:49.458 23:14:41 -- common/autotest_common.sh@852 -- # return 0 00:06:49.458 23:14:41 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58408 00:06:49.458 23:14:41 -- event/cpu_locks.sh@103 -- # waitforlisten 58408 /var/tmp/spdk2.sock 00:06:49.458 23:14:41 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.458 23:14:41 -- common/autotest_common.sh@819 -- # '[' -z 58408 ']' 00:06:49.458 23:14:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.458 23:14:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:49.458 23:14:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.458 23:14:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:49.458 23:14:41 -- common/autotest_common.sh@10 -- # set +x 00:06:49.717 [2024-07-26 23:14:41.214320] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:49.717 [2024-07-26 23:14:41.214631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58408 ] 00:06:49.717 [2024-07-26 23:14:41.385502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.286 [2024-07-26 23:14:41.786791] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.286 [2024-07-26 23:14:41.786984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.825 23:14:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.825 23:14:43 -- common/autotest_common.sh@852 -- # return 0 00:06:52.825 23:14:43 -- event/cpu_locks.sh@105 -- # locks_exist 58408 00:06:52.825 23:14:43 -- event/cpu_locks.sh@22 -- # lslocks -p 58408 00:06:52.825 23:14:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.760 23:14:45 -- event/cpu_locks.sh@107 -- # killprocess 58385 00:06:53.760 23:14:45 -- common/autotest_common.sh@926 -- # '[' -z 58385 ']' 00:06:53.760 23:14:45 -- common/autotest_common.sh@930 -- # kill -0 58385 00:06:53.760 23:14:45 -- common/autotest_common.sh@931 -- # uname 00:06:53.760 23:14:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:53.760 23:14:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58385 00:06:53.760 killing process with pid 58385 00:06:53.760 23:14:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:53.760 23:14:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:53.760 23:14:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58385' 00:06:53.760 23:14:45 -- common/autotest_common.sh@945 -- # kill 58385 00:06:53.760 23:14:45 -- common/autotest_common.sh@950 -- # wait 58385 00:06:59.035 23:14:49 -- event/cpu_locks.sh@108 -- # killprocess 58408 00:06:59.035 23:14:49 -- common/autotest_common.sh@926 -- # '[' -z 58408 ']' 00:06:59.035 23:14:49 -- common/autotest_common.sh@930 -- # kill -0 58408 00:06:59.035 23:14:49 -- common/autotest_common.sh@931 -- # uname 00:06:59.035 23:14:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:59.035 23:14:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58408 00:06:59.035 killing process with pid 58408 00:06:59.035 23:14:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:59.035 23:14:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:59.035 23:14:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58408' 00:06:59.035 23:14:49 -- common/autotest_common.sh@945 -- # kill 58408 00:06:59.035 23:14:49 -- common/autotest_common.sh@950 -- # wait 58408 00:07:00.943 ************************************ 00:07:00.943 END TEST locking_app_on_unlocked_coremask 00:07:00.943 ************************************ 00:07:00.943 00:07:00.943 real 0m12.890s 00:07:00.943 user 0m13.499s 00:07:00.943 sys 0m1.743s 00:07:00.943 23:14:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.943 23:14:52 -- common/autotest_common.sh@10 -- # set +x 00:07:00.943 23:14:52 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:00.943 23:14:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:00.943 23:14:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.943 23:14:52 -- common/autotest_common.sh@10 -- # set 
+x 00:07:00.943 ************************************ 00:07:00.943 START TEST locking_app_on_locked_coremask 00:07:00.943 ************************************ 00:07:00.943 23:14:52 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:07:00.943 23:14:52 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58567 00:07:00.943 23:14:52 -- event/cpu_locks.sh@116 -- # waitforlisten 58567 /var/tmp/spdk.sock 00:07:00.943 23:14:52 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.943 23:14:52 -- common/autotest_common.sh@819 -- # '[' -z 58567 ']' 00:07:00.943 23:14:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.943 23:14:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:00.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.943 23:14:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.943 23:14:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:00.943 23:14:52 -- common/autotest_common.sh@10 -- # set +x 00:07:00.943 [2024-07-26 23:14:52.637911] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:00.943 [2024-07-26 23:14:52.638030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58567 ] 00:07:01.203 [2024-07-26 23:14:52.806311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.462 [2024-07-26 23:14:53.060558] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:01.462 [2024-07-26 23:14:53.060763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.399 23:14:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:02.399 23:14:54 -- common/autotest_common.sh@852 -- # return 0 00:07:02.399 23:14:54 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58596 00:07:02.399 23:14:54 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58596 /var/tmp/spdk2.sock 00:07:02.399 23:14:54 -- common/autotest_common.sh@640 -- # local es=0 00:07:02.399 23:14:54 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:02.399 23:14:54 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58596 /var/tmp/spdk2.sock 00:07:02.399 23:14:54 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:02.399 23:14:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:02.399 23:14:54 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:02.399 23:14:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:02.399 23:14:54 -- common/autotest_common.sh@643 -- # waitforlisten 58596 /var/tmp/spdk2.sock 00:07:02.399 23:14:54 -- common/autotest_common.sh@819 -- # '[' -z 58596 ']' 00:07:02.399 23:14:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.399 23:14:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:02.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.399 23:14:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
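The NOT wrapper around waitforlisten 58596 above inverts an exit status: the test passes precisely because the second target, aimed at a core already locked by pid 58567, never starts listening. A stripped-down sketch of the idiom (the real helper in autotest_common.sh additionally validates its argument, as the valid_exec_arg lines show):

  # succeed only when the wrapped command fails
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))           # return success only for a non-zero exit status
  }
  NOT waitforlisten 58596 /var/tmp/spdk2.sock   # passes: pid 58596 never listens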
00:07:02.399 23:14:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:02.399 23:14:54 -- common/autotest_common.sh@10 -- # set +x 00:07:02.661 [2024-07-26 23:14:54.176056] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:02.661 [2024-07-26 23:14:54.176172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58596 ] 00:07:02.661 [2024-07-26 23:14:54.343754] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58567 has claimed it. 00:07:02.661 [2024-07-26 23:14:54.343812] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:03.228 ERROR: process (pid: 58596) is no longer running 00:07:03.228 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58596) - No such process 00:07:03.228 23:14:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:03.228 23:14:54 -- common/autotest_common.sh@852 -- # return 1 00:07:03.228 23:14:54 -- common/autotest_common.sh@643 -- # es=1 00:07:03.228 23:14:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:03.228 23:14:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:03.229 23:14:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:03.229 23:14:54 -- event/cpu_locks.sh@122 -- # locks_exist 58567 00:07:03.229 23:14:54 -- event/cpu_locks.sh@22 -- # lslocks -p 58567 00:07:03.229 23:14:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:03.487 23:14:55 -- event/cpu_locks.sh@124 -- # killprocess 58567 00:07:03.487 23:14:55 -- common/autotest_common.sh@926 -- # '[' -z 58567 ']' 00:07:03.487 23:14:55 -- common/autotest_common.sh@930 -- # kill -0 58567 00:07:03.487 23:14:55 -- common/autotest_common.sh@931 -- # uname 00:07:03.487 23:14:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:03.487 23:14:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58567 00:07:03.747 23:14:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:03.747 killing process with pid 58567 00:07:03.747 23:14:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:03.747 23:14:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58567' 00:07:03.747 23:14:55 -- common/autotest_common.sh@945 -- # kill 58567 00:07:03.747 23:14:55 -- common/autotest_common.sh@950 -- # wait 58567 00:07:06.283 00:07:06.283 real 0m5.217s 00:07:06.283 user 0m5.270s 00:07:06.283 sys 0m1.016s 00:07:06.283 ************************************ 00:07:06.283 END TEST locking_app_on_locked_coremask 00:07:06.283 ************************************ 00:07:06.283 23:14:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.283 23:14:57 -- common/autotest_common.sh@10 -- # set +x 00:07:06.283 23:14:57 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:06.283 23:14:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:06.283 23:14:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.283 23:14:57 -- common/autotest_common.sh@10 -- # set +x 00:07:06.283 ************************************ 00:07:06.283 START TEST locking_overlapped_coremask 00:07:06.283 ************************************ 00:07:06.283 23:14:57 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:07:06.283 23:14:57 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58660 00:07:06.283 23:14:57 -- event/cpu_locks.sh@133 -- # waitforlisten 58660 /var/tmp/spdk.sock 00:07:06.283 23:14:57 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:06.283 23:14:57 -- common/autotest_common.sh@819 -- # '[' -z 58660 ']' 00:07:06.283 23:14:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.283 23:14:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:06.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.283 23:14:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.283 23:14:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:06.283 23:14:57 -- common/autotest_common.sh@10 -- # set +x 00:07:06.283 [2024-07-26 23:14:57.931105] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:06.283 [2024-07-26 23:14:57.931234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58660 ] 00:07:06.541 [2024-07-26 23:14:58.104697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.799 [2024-07-26 23:14:58.359502] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:06.799 [2024-07-26 23:14:58.360000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.799 [2024-07-26 23:14:58.360102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.799 [2024-07-26 23:14:58.360141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.696 23:15:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:08.696 23:15:00 -- common/autotest_common.sh@852 -- # return 0 00:07:08.696 23:15:00 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:08.696 23:15:00 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58699 00:07:08.696 23:15:00 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58699 /var/tmp/spdk2.sock 00:07:08.696 23:15:00 -- common/autotest_common.sh@640 -- # local es=0 00:07:08.696 23:15:00 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58699 /var/tmp/spdk2.sock 00:07:08.696 23:15:00 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:08.696 23:15:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:08.696 23:15:00 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:08.697 23:15:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:08.697 23:15:00 -- common/autotest_common.sh@643 -- # waitforlisten 58699 /var/tmp/spdk2.sock 00:07:08.697 23:15:00 -- common/autotest_common.sh@819 -- # '[' -z 58699 ']' 00:07:08.697 23:15:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.697 23:15:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:08.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.697 23:15:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
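The failure being engineered here can be read straight off the CPU masks: the first target holds -m 0x7 (binary 00111, cores 0-2) while the second asks for -m 0x1c (binary 11100, cores 2-4), so both need core 2 and the second claim must fail, as the claim_cpu_cores error below confirms. Shell arithmetic makes the overlap explicit:

  printf 'overlapping cores: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2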
00:07:08.697 23:15:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:08.697 23:15:00 -- common/autotest_common.sh@10 -- # set +x 00:07:08.697 [2024-07-26 23:15:00.084247] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:08.697 [2024-07-26 23:15:00.084380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58699 ] 00:07:08.697 [2024-07-26 23:15:00.249369] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58660 has claimed it. 00:07:08.697 [2024-07-26 23:15:00.249453] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:08.954 ERROR: process (pid: 58699) is no longer running 00:07:08.954 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58699) - No such process 00:07:08.954 23:15:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:08.954 23:15:00 -- common/autotest_common.sh@852 -- # return 1 00:07:08.954 23:15:00 -- common/autotest_common.sh@643 -- # es=1 00:07:08.954 23:15:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:08.954 23:15:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:08.954 23:15:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:08.954 23:15:00 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:08.954 23:15:00 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.954 23:15:00 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.954 23:15:00 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.954 23:15:00 -- event/cpu_locks.sh@141 -- # killprocess 58660 00:07:08.954 23:15:00 -- common/autotest_common.sh@926 -- # '[' -z 58660 ']' 00:07:08.955 23:15:00 -- common/autotest_common.sh@930 -- # kill -0 58660 00:07:08.955 23:15:00 -- common/autotest_common.sh@931 -- # uname 00:07:08.955 23:15:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:08.955 23:15:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58660 00:07:09.213 23:15:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:09.213 killing process with pid 58660 00:07:09.213 23:15:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:09.213 23:15:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58660' 00:07:09.213 23:15:00 -- common/autotest_common.sh@945 -- # kill 58660 00:07:09.213 23:15:00 -- common/autotest_common.sh@950 -- # wait 58660 00:07:11.743 00:07:11.743 real 0m5.499s 00:07:11.743 user 0m14.437s 00:07:11.743 sys 0m0.807s 00:07:11.743 23:15:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.743 ************************************ 00:07:11.743 END TEST locking_overlapped_coremask 00:07:11.743 ************************************ 00:07:11.743 23:15:03 -- common/autotest_common.sh@10 -- # set +x 00:07:11.743 23:15:03 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:11.743 23:15:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.743 23:15:03 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.743 23:15:03 -- common/autotest_common.sh@10 -- # set +x 00:07:11.743 ************************************ 00:07:11.743 START TEST locking_overlapped_coremask_via_rpc 00:07:11.743 ************************************ 00:07:11.743 23:15:03 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:07:11.743 23:15:03 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58767 00:07:11.743 23:15:03 -- event/cpu_locks.sh@149 -- # waitforlisten 58767 /var/tmp/spdk.sock 00:07:11.743 23:15:03 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:11.743 23:15:03 -- common/autotest_common.sh@819 -- # '[' -z 58767 ']' 00:07:11.743 23:15:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.743 23:15:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:11.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.743 23:15:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.743 23:15:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:11.743 23:15:03 -- common/autotest_common.sh@10 -- # set +x 00:07:12.001 [2024-07-26 23:15:03.521934] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:12.001 [2024-07-26 23:15:03.522064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58767 ] 00:07:12.001 [2024-07-26 23:15:03.695050] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:12.001 [2024-07-26 23:15:03.695122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.259 [2024-07-26 23:15:03.979744] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:12.259 [2024-07-26 23:15:03.980198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.259 [2024-07-26 23:15:03.980339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.259 [2024-07-26 23:15:03.980368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.160 23:15:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:14.160 23:15:05 -- common/autotest_common.sh@852 -- # return 0 00:07:14.160 23:15:05 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58800 00:07:14.160 23:15:05 -- event/cpu_locks.sh@153 -- # waitforlisten 58800 /var/tmp/spdk2.sock 00:07:14.160 23:15:05 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:14.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.160 23:15:05 -- common/autotest_common.sh@819 -- # '[' -z 58800 ']' 00:07:14.160 23:15:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.160 23:15:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:14.160 23:15:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
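The check_remaining_locks helper used at the end of the previous test asserts that exactly the lock files for cores 0-2 survive; it is nothing more than bash globbing compared against a brace expansion:

  locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files present now
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 of mask 0x7
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # compare the expanded lists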
00:07:14.160 23:15:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:14.160 23:15:05 -- common/autotest_common.sh@10 -- # set +x 00:07:14.160 [2024-07-26 23:15:05.679380] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:14.160 [2024-07-26 23:15:05.679491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58800 ] 00:07:14.160 [2024-07-26 23:15:05.845813] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:14.160 [2024-07-26 23:15:05.845866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.727 [2024-07-26 23:15:06.388266] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:14.727 [2024-07-26 23:15:06.389041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.727 [2024-07-26 23:15:06.389201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.727 [2024-07-26 23:15:06.389236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:18.039 23:15:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:18.039 23:15:09 -- common/autotest_common.sh@852 -- # return 0 00:07:18.039 23:15:09 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:18.039 23:15:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.039 23:15:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.039 23:15:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.039 23:15:09 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.039 23:15:09 -- common/autotest_common.sh@640 -- # local es=0 00:07:18.039 23:15:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.039 23:15:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:07:18.039 23:15:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:18.039 23:15:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:07:18.039 23:15:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:18.039 23:15:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.039 23:15:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.039 23:15:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.039 [2024-07-26 23:15:09.149174] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58767 has claimed it. 
00:07:18.039 request:
00:07:18.039 {
00:07:18.039 "method": "framework_enable_cpumask_locks",
00:07:18.039 "req_id": 1
00:07:18.039 }
00:07:18.039 Got JSON-RPC error response
00:07:18.039 response:
00:07:18.039 {
00:07:18.039 "code": -32603,
00:07:18.039 "message": "Failed to claim CPU core: 2"
00:07:18.039 }
00:07:18.039 23:15:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]]
00:07:18.039 23:15:09 -- common/autotest_common.sh@643 -- # es=1
00:07:18.039 23:15:09 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:07:18.039 23:15:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:07:18.039 23:15:09 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:07:18.039 23:15:09 -- event/cpu_locks.sh@158 -- # waitforlisten 58767 /var/tmp/spdk.sock
00:07:18.039 23:15:09 -- common/autotest_common.sh@819 -- # '[' -z 58767 ']'
00:07:18.039 23:15:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:18.039 23:15:09 -- common/autotest_common.sh@824 -- # local max_retries=100
00:07:18.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:18.039 23:15:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:18.039 23:15:09 -- common/autotest_common.sh@828 -- # xtrace_disable
00:07:18.039 23:15:09 -- common/autotest_common.sh@10 -- # set +x
00:07:18.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:18.039 23:15:09 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:07:18.039 23:15:09 -- common/autotest_common.sh@852 -- # return 0
00:07:18.039 23:15:09 -- event/cpu_locks.sh@159 -- # waitforlisten 58800 /var/tmp/spdk2.sock
00:07:18.039 23:15:09 -- common/autotest_common.sh@819 -- # '[' -z 58800 ']'
00:07:18.039 23:15:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:18.039 23:15:09 -- common/autotest_common.sh@824 -- # local max_retries=100
00:07:18.039 23:15:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
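The request/response pair above is the raw JSON-RPC exchange behind rpc_cmd: framework_enable_cpumask_locks fails with code -32603 because pid 58767 still holds the lock for core 2. The same call can be issued by hand against the second target's socket (a repro sketch; it assumes 58767 is still running):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected while 58767 is alive: JSON-RPC error -32603,
  #   "Failed to claim CPU core: 2"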
00:07:18.039 23:15:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:18.039 23:15:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.039 ************************************ 00:07:18.039 END TEST locking_overlapped_coremask_via_rpc 00:07:18.039 ************************************ 00:07:18.039 23:15:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:18.039 23:15:09 -- common/autotest_common.sh@852 -- # return 0 00:07:18.039 23:15:09 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:18.039 23:15:09 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:18.039 23:15:09 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:18.039 23:15:09 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:18.039 00:07:18.039 real 0m6.126s 00:07:18.039 user 0m1.717s 00:07:18.039 sys 0m0.335s 00:07:18.039 23:15:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.039 23:15:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.039 23:15:09 -- event/cpu_locks.sh@174 -- # cleanup 00:07:18.039 23:15:09 -- event/cpu_locks.sh@15 -- # [[ -z 58767 ]] 00:07:18.039 23:15:09 -- event/cpu_locks.sh@15 -- # killprocess 58767 00:07:18.039 23:15:09 -- common/autotest_common.sh@926 -- # '[' -z 58767 ']' 00:07:18.039 23:15:09 -- common/autotest_common.sh@930 -- # kill -0 58767 00:07:18.039 23:15:09 -- common/autotest_common.sh@931 -- # uname 00:07:18.039 23:15:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:18.039 23:15:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58767 00:07:18.039 killing process with pid 58767 00:07:18.039 23:15:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:18.039 23:15:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:18.039 23:15:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58767' 00:07:18.039 23:15:09 -- common/autotest_common.sh@945 -- # kill 58767 00:07:18.039 23:15:09 -- common/autotest_common.sh@950 -- # wait 58767 00:07:20.577 23:15:12 -- event/cpu_locks.sh@16 -- # [[ -z 58800 ]] 00:07:20.577 23:15:12 -- event/cpu_locks.sh@16 -- # killprocess 58800 00:07:20.577 23:15:12 -- common/autotest_common.sh@926 -- # '[' -z 58800 ']' 00:07:20.577 23:15:12 -- common/autotest_common.sh@930 -- # kill -0 58800 00:07:20.577 23:15:12 -- common/autotest_common.sh@931 -- # uname 00:07:20.577 23:15:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:20.577 23:15:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58800 00:07:20.577 killing process with pid 58800 00:07:20.577 23:15:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:07:20.577 23:15:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:07:20.577 23:15:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58800' 00:07:20.577 23:15:12 -- common/autotest_common.sh@945 -- # kill 58800 00:07:20.577 23:15:12 -- common/autotest_common.sh@950 -- # wait 58800 00:07:23.111 23:15:14 -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.111 Process with pid 58767 is not found 00:07:23.111 Process with pid 58800 is not found 00:07:23.111 23:15:14 -- event/cpu_locks.sh@1 -- # cleanup 00:07:23.111 23:15:14 -- event/cpu_locks.sh@15 -- # [[ -z 58767 ]] 
00:07:23.111 23:15:14 -- event/cpu_locks.sh@15 -- # killprocess 58767 00:07:23.111 23:15:14 -- common/autotest_common.sh@926 -- # '[' -z 58767 ']' 00:07:23.111 23:15:14 -- common/autotest_common.sh@930 -- # kill -0 58767 00:07:23.111 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58767) - No such process 00:07:23.111 23:15:14 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58767 is not found' 00:07:23.111 23:15:14 -- event/cpu_locks.sh@16 -- # [[ -z 58800 ]] 00:07:23.111 23:15:14 -- event/cpu_locks.sh@16 -- # killprocess 58800 00:07:23.111 23:15:14 -- common/autotest_common.sh@926 -- # '[' -z 58800 ']' 00:07:23.111 23:15:14 -- common/autotest_common.sh@930 -- # kill -0 58800 00:07:23.111 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58800) - No such process 00:07:23.111 23:15:14 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58800 is not found' 00:07:23.112 23:15:14 -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.112 00:07:23.112 real 0m58.608s 00:07:23.112 user 1m40.263s 00:07:23.112 sys 0m9.042s 00:07:23.112 23:15:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.112 23:15:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.112 ************************************ 00:07:23.112 END TEST cpu_locks 00:07:23.112 ************************************ 00:07:23.112 00:07:23.112 real 1m31.600s 00:07:23.112 user 2m41.093s 00:07:23.112 sys 0m13.724s 00:07:23.112 23:15:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.112 23:15:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.112 ************************************ 00:07:23.112 END TEST event 00:07:23.112 ************************************ 00:07:23.112 23:15:14 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:23.112 23:15:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:23.112 23:15:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.112 23:15:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.112 ************************************ 00:07:23.112 START TEST thread 00:07:23.112 ************************************ 00:07:23.112 23:15:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:23.112 * Looking for test storage... 00:07:23.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:23.112 23:15:14 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.112 23:15:14 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:23.112 23:15:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.112 23:15:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.112 ************************************ 00:07:23.112 START TEST thread_poller_perf 00:07:23.112 ************************************ 00:07:23.112 23:15:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.371 [2024-07-26 23:15:14.892342] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
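The two "No such process" messages above are expected noise from cleanup: killprocess runs once more for pids 58767 and 58800 after both targets have already exited, and its liveness probe is kill -0, which delivers no signal but still reports through its exit status whether the pid exists:

  # kill -0 is a pure existence check; no signal reaches the target process.
  if kill -0 "$pid" 2> /dev/null; then
    echo "killing process with pid $pid"
  else
    echo "Process with pid $pid is not found"   # the branch taken in both cases above
  fi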
00:07:23.371 [2024-07-26 23:15:14.892634] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58999 ] 00:07:23.371 [2024-07-26 23:15:15.066066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.630 [2024-07-26 23:15:15.327445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.630 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:25.009 ====================================== 00:07:25.010 busy:2500300458 (cyc) 00:07:25.010 total_run_count: 398000 00:07:25.010 tsc_hz: 2490000000 (cyc) 00:07:25.010 ====================================== 00:07:25.010 poller_cost: 6282 (cyc), 2522 (nsec) 00:07:25.270 00:07:25.270 real 0m1.943s 00:07:25.270 user 0m1.680s 00:07:25.270 sys 0m0.152s 00:07:25.270 23:15:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.270 23:15:16 -- common/autotest_common.sh@10 -- # set +x 00:07:25.270 ************************************ 00:07:25.270 END TEST thread_poller_perf 00:07:25.270 ************************************ 00:07:25.270 23:15:16 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.270 23:15:16 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:25.270 23:15:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.270 23:15:16 -- common/autotest_common.sh@10 -- # set +x 00:07:25.270 ************************************ 00:07:25.270 START TEST thread_poller_perf 00:07:25.270 ************************************ 00:07:25.270 23:15:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:25.270 [2024-07-26 23:15:16.914284] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:25.270 [2024-07-26 23:15:16.914394] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59041 ] 00:07:25.530 [2024-07-26 23:15:17.083869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.790 [2024-07-26 23:15:17.346595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.790 Running 1000 pollers for 1 seconds with 0 microseconds period. 
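poller_cost in the report above is plain integer division: busy TSC cycles over total_run_count, then cycles over tsc_hz for the nanosecond figure. For this 1-microsecond-period run, 2500300458 / 398000 = 6282 cyc, and 6282 cycles at 2.49 GHz is 2522 ns, exactly the printed values. The second run below passes -l 0, so its pollers fire on every reactor iteration and the per-call cost drops to a few hundred cycles. Checked with shell arithmetic:

  # Integer math matches the report's truncation.
  echo $(( 2500300458 / 398000 ))              # 6282 cycles per poller invocation
  echo $(( 6282 * 1000000000 / 2490000000 ))   # 2522 ns at tsc_hz = 2490000000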
00:07:27.172 ====================================== 00:07:27.172 busy:2495828876 (cyc) 00:07:27.172 total_run_count: 5457000 00:07:27.172 tsc_hz: 2490000000 (cyc) 00:07:27.172 ====================================== 00:07:27.172 poller_cost: 457 (cyc), 183 (nsec) 00:07:27.172 00:07:27.172 real 0m1.936s 00:07:27.172 user 0m1.674s 00:07:27.172 sys 0m0.153s 00:07:27.172 ************************************ 00:07:27.172 END TEST thread_poller_perf 00:07:27.172 ************************************ 00:07:27.172 23:15:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.172 23:15:18 -- common/autotest_common.sh@10 -- # set +x 00:07:27.172 23:15:18 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:27.172 ************************************ 00:07:27.172 END TEST thread 00:07:27.172 ************************************ 00:07:27.172 00:07:27.172 real 0m4.163s 00:07:27.172 user 0m3.442s 00:07:27.172 sys 0m0.497s 00:07:27.172 23:15:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.172 23:15:18 -- common/autotest_common.sh@10 -- # set +x 00:07:27.172 23:15:18 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:27.433 23:15:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:27.433 23:15:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.433 23:15:18 -- common/autotest_common.sh@10 -- # set +x 00:07:27.433 ************************************ 00:07:27.433 START TEST accel 00:07:27.433 ************************************ 00:07:27.433 23:15:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:27.433 * Looking for test storage... 00:07:27.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:27.433 23:15:19 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:27.433 23:15:19 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:27.433 23:15:19 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:27.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.433 23:15:19 -- accel/accel.sh@59 -- # spdk_tgt_pid=59121 00:07:27.433 23:15:19 -- accel/accel.sh@60 -- # waitforlisten 59121 00:07:27.433 23:15:19 -- common/autotest_common.sh@819 -- # '[' -z 59121 ']' 00:07:27.433 23:15:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.433 23:15:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:27.433 23:15:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.433 23:15:19 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:27.433 23:15:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:27.433 23:15:19 -- accel/accel.sh@58 -- # build_accel_config 00:07:27.433 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:07:27.433 23:15:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.433 23:15:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.433 23:15:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.433 23:15:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.433 23:15:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.433 23:15:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.433 23:15:19 -- accel/accel.sh@42 -- # jq -r . 00:07:27.433 [2024-07-26 23:15:19.176583] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
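waitforlisten (invoked above with max_retries=100 once accel.sh has launched spdk_tgt as pid 59121) parks the test until the target answers on /var/tmp/spdk.sock. A loose sketch of the pattern, not the literal helper from autotest_common.sh:

  # Assumed shape (inside a helper function): poll the RPC socket until the target answers.
  for (( i = 0; i < 100; i++ )); do              # max_retries=100, as logged above
    kill -0 "$spdk_tgt_pid" || return 1          # target died before it started listening
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && return 0
    sleep 0.5
  done
  return 1                                       # timed out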
00:07:27.433 [2024-07-26 23:15:19.176776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59121 ] 00:07:27.693 [2024-07-26 23:15:19.351444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.953 [2024-07-26 23:15:19.609156] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:27.953 [2024-07-26 23:15:19.609369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.863 23:15:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:29.863 23:15:21 -- common/autotest_common.sh@852 -- # return 0 00:07:29.863 23:15:21 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:29.863 23:15:21 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:29.863 23:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.863 23:15:21 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:29.863 23:15:21 -- common/autotest_common.sh@10 -- # set +x 00:07:29.863 23:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.863 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.863 23:15:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:29.863 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.863 23:15:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:29.863 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.863 23:15:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:29.863 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.863 23:15:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:29.863 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.863 23:15:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:29.863 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.863 23:15:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:29.863 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.863 23:15:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:29.863 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.863 23:15:21 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:07:29.863 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.863 23:15:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:29.863 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.863 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.864 23:15:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:29.864 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.864 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.864 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.864 23:15:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:29.864 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.864 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.864 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.864 23:15:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:29.864 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.864 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.864 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.864 23:15:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:29.864 23:15:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:29.864 23:15:21 -- accel/accel.sh@64 -- # IFS== 00:07:29.864 23:15:21 -- accel/accel.sh@64 -- # read -r opc module 00:07:29.864 23:15:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:29.864 23:15:21 -- accel/accel.sh@67 -- # killprocess 59121 00:07:29.864 23:15:21 -- common/autotest_common.sh@926 -- # '[' -z 59121 ']' 00:07:29.864 23:15:21 -- common/autotest_common.sh@930 -- # kill -0 59121 00:07:29.864 23:15:21 -- common/autotest_common.sh@931 -- # uname 00:07:29.864 23:15:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:29.864 23:15:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59121 00:07:29.864 killing process with pid 59121 00:07:29.864 23:15:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:29.864 23:15:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:29.864 23:15:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59121' 00:07:29.864 23:15:21 -- common/autotest_common.sh@945 -- # kill 59121 00:07:29.864 23:15:21 -- common/autotest_common.sh@950 -- # wait 59121 00:07:32.404 23:15:23 -- accel/accel.sh@68 -- # trap - ERR 00:07:32.404 23:15:23 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:32.404 23:15:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:32.404 23:15:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.404 23:15:23 -- common/autotest_common.sh@10 -- # set +x 00:07:32.404 23:15:23 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:07:32.404 23:15:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:32.404 23:15:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.404 23:15:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.404 23:15:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.404 23:15:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.404 23:15:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.404 23:15:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 
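The long run of expected_opcs["$opc"]=software assignments above is accel.sh@62-65 recording which module handles each accel opcode: accel_get_opc_assignments returns a JSON object, jq flattens it to key=value lines, and IFS== splits each line into opcode and module. With no hardware engines configured, every opcode falls back to software. The flattening step on a made-up payload:

  # Hypothetical input; the jq filter is the one quoted in the xtrace above.
  echo '{"copy": "software", "fill": "software", "crc32c": "software"}' \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # copy=software
  # fill=software
  # crc32c=software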
00:07:32.404 23:15:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.404 23:15:23 -- accel/accel.sh@42 -- # jq -r . 00:07:32.404 23:15:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.404 23:15:23 -- common/autotest_common.sh@10 -- # set +x 00:07:32.404 23:15:24 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:32.404 23:15:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:32.404 23:15:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.404 23:15:24 -- common/autotest_common.sh@10 -- # set +x 00:07:32.404 ************************************ 00:07:32.404 START TEST accel_missing_filename 00:07:32.404 ************************************ 00:07:32.404 23:15:24 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:07:32.404 23:15:24 -- common/autotest_common.sh@640 -- # local es=0 00:07:32.404 23:15:24 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:32.404 23:15:24 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:32.404 23:15:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:32.404 23:15:24 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:32.404 23:15:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:32.404 23:15:24 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:07:32.404 23:15:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:32.404 23:15:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.404 23:15:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.404 23:15:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.404 23:15:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.404 23:15:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.404 23:15:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.404 23:15:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.404 23:15:24 -- accel/accel.sh@42 -- # jq -r . 00:07:32.404 [2024-07-26 23:15:24.115707] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:32.404 [2024-07-26 23:15:24.115838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59217 ] 00:07:32.664 [2024-07-26 23:15:24.286943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.923 [2024-07-26 23:15:24.544218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.183 [2024-07-26 23:15:24.814850] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.752 [2024-07-26 23:15:25.360465] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:34.321 A filename is required. 
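"A filename is required." is the pass condition for accel_missing_filename: the compress workload reads its input from a file, so accel_perf aborts during startup when -w compress is given without -l. The failing run above and its corrected form would look like:

  # Exercised above and expected to fail: no -l input file for compress.
  build/examples/accel_perf -t 1 -w compress
  # Fine: the next test supplies test/accel/bib as the input.
  build/examples/accel_perf -t 1 -w compress -l test/accel/bib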
00:07:34.321 23:15:25 -- common/autotest_common.sh@643 -- # es=234 00:07:34.321 23:15:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:34.321 23:15:25 -- common/autotest_common.sh@652 -- # es=106 00:07:34.321 23:15:25 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:34.321 23:15:25 -- common/autotest_common.sh@660 -- # es=1 00:07:34.321 23:15:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:34.321 00:07:34.321 real 0m1.773s 00:07:34.321 user 0m1.445s 00:07:34.321 sys 0m0.262s 00:07:34.321 23:15:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.321 ************************************ 00:07:34.321 END TEST accel_missing_filename 00:07:34.321 ************************************ 00:07:34.321 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:07:34.321 23:15:25 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:34.321 23:15:25 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:34.321 23:15:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.322 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:07:34.322 ************************************ 00:07:34.322 START TEST accel_compress_verify 00:07:34.322 ************************************ 00:07:34.322 23:15:25 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:34.322 23:15:25 -- common/autotest_common.sh@640 -- # local es=0 00:07:34.322 23:15:25 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:34.322 23:15:25 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:34.322 23:15:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:34.322 23:15:25 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:34.322 23:15:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:34.322 23:15:25 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:34.322 23:15:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:34.322 23:15:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.322 23:15:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.322 23:15:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.322 23:15:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.322 23:15:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.322 23:15:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.322 23:15:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.322 23:15:25 -- accel/accel.sh@42 -- # jq -r . 00:07:34.322 [2024-07-26 23:15:25.958674] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
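The es= bookkeeping above is how the NOT wrapper in autotest_common.sh grades an expected failure. A simplified gist of the traced lines (the real helper handles more cases than shown here):

  # es=234 is the raw status after accel_perf aborts; the helper folds statuses
  # above 128 (the shell's killed-by-signal range) by subtracting 128.
  (( es > 128 )) && es=$(( es - 128 ))   # 234 -> 106, as traced above
  case "$es" in 106) es=1 ;; esac        # assumed mapping; the trace shows 106 -> 1
  (( !es == 0 ))                         # true exactly when es != 0, so NOT passes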
00:07:34.322 [2024-07-26 23:15:25.958981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59259 ] 00:07:34.580 [2024-07-26 23:15:26.131740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.839 [2024-07-26 23:15:26.388169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.098 [2024-07-26 23:15:26.659099] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.667 [2024-07-26 23:15:27.206330] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:35.926 00:07:35.926 Compression does not support the verify option, aborting. 00:07:35.926 23:15:27 -- common/autotest_common.sh@643 -- # es=161 00:07:35.926 23:15:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:35.926 23:15:27 -- common/autotest_common.sh@652 -- # es=33 00:07:35.926 ************************************ 00:07:35.926 END TEST accel_compress_verify 00:07:35.926 ************************************ 00:07:35.926 23:15:27 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:35.926 23:15:27 -- common/autotest_common.sh@660 -- # es=1 00:07:35.926 23:15:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:35.926 00:07:35.926 real 0m1.768s 00:07:35.926 user 0m1.459s 00:07:35.926 sys 0m0.239s 00:07:35.926 23:15:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.926 23:15:27 -- common/autotest_common.sh@10 -- # set +x 00:07:36.186 23:15:27 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:36.186 23:15:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:36.186 23:15:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.186 23:15:27 -- common/autotest_common.sh@10 -- # set +x 00:07:36.186 ************************************ 00:07:36.186 START TEST accel_wrong_workload 00:07:36.186 ************************************ 00:07:36.186 23:15:27 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:07:36.186 23:15:27 -- common/autotest_common.sh@640 -- # local es=0 00:07:36.186 23:15:27 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:36.186 23:15:27 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:36.186 23:15:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:36.186 23:15:27 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:36.186 23:15:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:36.186 23:15:27 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:07:36.186 23:15:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:36.186 23:15:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.186 23:15:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.186 23:15:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.186 23:15:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.186 23:15:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.186 23:15:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.186 23:15:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.186 23:15:27 -- accel/accel.sh@42 -- # jq -r . 
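accel_compress_verify is the companion negative test: the input file is supplied this time, but pairing -w compress with -y (verify) is rejected with "Compression does not support the verify option, aborting." and graded through the same status path (161 -> 33 -> 1). As an invocation:

  # Expected to fail: accel_perf cannot verify compress output.
  build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y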
00:07:36.186 Unsupported workload type: foobar 00:07:36.186 [2024-07-26 23:15:27.791676] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:36.186 accel_perf options: 00:07:36.186 [-h help message] 00:07:36.186 [-q queue depth per core] 00:07:36.186 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:36.186 [-T number of threads per core 00:07:36.186 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:36.186 [-t time in seconds] 00:07:36.186 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:36.186 [ dif_verify, , dif_generate, dif_generate_copy 00:07:36.186 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:36.186 [-l for compress/decompress workloads, name of uncompressed input file 00:07:36.186 [-S for crc32c workload, use this seed value (default 0) 00:07:36.186 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:36.186 [-f for fill workload, use this BYTE value (default 255) 00:07:36.186 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:36.186 [-y verify result if this switch is on] 00:07:36.186 [-a tasks to allocate per core (default: same value as -q)] 00:07:36.186 Can be used to spread operations across a wider range of memory. 00:07:36.186 ************************************ 00:07:36.186 END TEST accel_wrong_workload 00:07:36.186 ************************************ 00:07:36.186 23:15:27 -- common/autotest_common.sh@643 -- # es=1 00:07:36.186 23:15:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:36.186 23:15:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:36.186 23:15:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:36.186 00:07:36.186 real 0m0.089s 00:07:36.186 user 0m0.082s 00:07:36.186 sys 0m0.049s 00:07:36.186 23:15:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.186 23:15:27 -- common/autotest_common.sh@10 -- # set +x 00:07:36.186 23:15:27 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:36.187 23:15:27 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:36.187 23:15:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.187 23:15:27 -- common/autotest_common.sh@10 -- # set +x 00:07:36.187 ************************************ 00:07:36.187 START TEST accel_negative_buffers 00:07:36.187 ************************************ 00:07:36.187 23:15:27 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:36.187 23:15:27 -- common/autotest_common.sh@640 -- # local es=0 00:07:36.187 23:15:27 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:36.187 23:15:27 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:36.187 23:15:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:36.187 23:15:27 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:36.187 23:15:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:36.187 23:15:27 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:07:36.187 23:15:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:36.187 23:15:27 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:36.187 23:15:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.187 23:15:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.187 23:15:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.187 23:15:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.187 23:15:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.187 23:15:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.187 23:15:27 -- accel/accel.sh@42 -- # jq -r . 00:07:36.447 -x option must be non-negative. 00:07:36.447 [2024-07-26 23:15:27.963839] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:36.447 accel_perf options: 00:07:36.447 [-h help message] 00:07:36.447 [-q queue depth per core] 00:07:36.447 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:36.447 [-T number of threads per core 00:07:36.447 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:36.447 [-t time in seconds] 00:07:36.447 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:36.447 [ dif_verify, , dif_generate, dif_generate_copy 00:07:36.447 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:36.447 [-l for compress/decompress workloads, name of uncompressed input file 00:07:36.447 [-S for crc32c workload, use this seed value (default 0) 00:07:36.447 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:36.447 [-f for fill workload, use this BYTE value (default 255) 00:07:36.447 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:36.447 [-y verify result if this switch is on] 00:07:36.447 [-a tasks to allocate per core (default: same value as -q)] 00:07:36.447 Can be used to spread operations across a wider range of memory. 
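The two usage dumps above are accel_perf bailing out in argument parsing (app.c:1292) rather than at run time: first on an unknown workload name, then on a negative xor source-buffer count. That is also why the real/user/sys timings recorded for these tests (0m0.089s above, 0m0.096s just below) stay under a tenth of a second. The rejected invocations, as issued by run_test:

  # Both fail fast during option parsing and print the usage text above.
  accel_perf -t 1 -w foobar        # Unsupported workload type: foobar
  accel_perf -t 1 -w xor -y -x -1  # -x option must be non-negative.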
00:07:36.447 23:15:27 -- common/autotest_common.sh@643 -- # es=1 00:07:36.447 23:15:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:36.447 23:15:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:36.447 23:15:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:36.447 00:07:36.447 real 0m0.096s 00:07:36.447 user 0m0.078s 00:07:36.447 sys 0m0.057s 00:07:36.447 23:15:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.447 23:15:27 -- common/autotest_common.sh@10 -- # set +x 00:07:36.447 ************************************ 00:07:36.447 END TEST accel_negative_buffers 00:07:36.447 ************************************ 00:07:36.447 23:15:28 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:36.447 23:15:28 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:36.447 23:15:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.447 23:15:28 -- common/autotest_common.sh@10 -- # set +x 00:07:36.447 ************************************ 00:07:36.447 START TEST accel_crc32c 00:07:36.447 ************************************ 00:07:36.447 23:15:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:36.447 23:15:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.447 23:15:28 -- accel/accel.sh@17 -- # local accel_module 00:07:36.447 23:15:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:36.447 23:15:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:36.447 23:15:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.447 23:15:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.447 23:15:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.447 23:15:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.447 23:15:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.447 23:15:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.447 23:15:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.447 23:15:28 -- accel/accel.sh@42 -- # jq -r . 00:07:36.447 [2024-07-26 23:15:28.128450] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:36.447 [2024-07-26 23:15:28.128566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59337 ] 00:07:36.707 [2024-07-26 23:15:28.301674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.966 [2024-07-26 23:15:28.560621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.505 23:15:30 -- accel/accel.sh@18 -- # out=' 00:07:39.505 SPDK Configuration: 00:07:39.505 Core mask: 0x1 00:07:39.505 00:07:39.505 Accel Perf Configuration: 00:07:39.505 Workload Type: crc32c 00:07:39.505 CRC-32C seed: 32 00:07:39.505 Transfer size: 4096 bytes 00:07:39.505 Vector count 1 00:07:39.505 Module: software 00:07:39.505 Queue depth: 32 00:07:39.505 Allocate depth: 32 00:07:39.505 # threads/core: 1 00:07:39.505 Run time: 1 seconds 00:07:39.505 Verify: Yes 00:07:39.505 00:07:39.505 Running for 1 seconds... 
00:07:39.505 00:07:39.505 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.506 ------------------------------------------------------------------------------------ 00:07:39.506 0,0 563936/s 2202 MiB/s 0 0 00:07:39.506 ==================================================================================== 00:07:39.506 Total 563936/s 2202 MiB/s 0 0' 00:07:39.506 23:15:30 -- accel/accel.sh@20 -- # IFS=: 00:07:39.506 23:15:30 -- accel/accel.sh@20 -- # read -r var val 00:07:39.506 23:15:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:39.506 23:15:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:39.506 23:15:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.506 23:15:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.506 23:15:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.506 23:15:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.506 23:15:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.506 23:15:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.506 23:15:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.506 23:15:30 -- accel/accel.sh@42 -- # jq -r . 00:07:39.506 [2024-07-26 23:15:30.912819] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:39.506 [2024-07-26 23:15:30.912916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59369 ] 00:07:39.506 [2024-07-26 23:15:31.079682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.765 [2024-07-26 23:15:31.330317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val= 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val= 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val=0x1 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val= 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val= 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val=crc32c 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val=32 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val= 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val=software 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val=32 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val=32 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val=1 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val=Yes 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val= 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:40.025 23:15:31 -- accel/accel.sh@21 -- # val= 00:07:40.025 23:15:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # IFS=: 00:07:40.025 23:15:31 -- accel/accel.sh@20 -- # read -r var val 00:07:41.930 23:15:33 -- accel/accel.sh@21 -- # val= 00:07:41.930 23:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.930 23:15:33 -- accel/accel.sh@20 -- # IFS=: 00:07:41.930 23:15:33 -- accel/accel.sh@20 -- # read -r var val 00:07:41.930 23:15:33 -- accel/accel.sh@21 -- # val= 00:07:41.930 23:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.930 23:15:33 -- accel/accel.sh@20 -- # IFS=: 00:07:41.930 23:15:33 -- accel/accel.sh@20 -- # read -r var val 00:07:41.930 23:15:33 -- accel/accel.sh@21 -- # val= 00:07:41.930 23:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.930 23:15:33 -- accel/accel.sh@20 -- # IFS=: 00:07:41.930 23:15:33 -- accel/accel.sh@20 -- # read -r var val 00:07:41.930 23:15:33 -- accel/accel.sh@21 -- # val= 00:07:41.930 23:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.930 23:15:33 -- accel/accel.sh@20 -- # IFS=: 00:07:41.930 23:15:33 -- accel/accel.sh@20 -- # read -r var val 00:07:41.930 23:15:33 -- accel/accel.sh@21 -- # val= 00:07:41.930 23:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.930 23:15:33 -- accel/accel.sh@20 -- # IFS=: 00:07:41.930 23:15:33 -- 
accel/accel.sh@20 -- # read -r var val 00:07:41.930 23:15:33 -- accel/accel.sh@21 -- # val= 00:07:41.930 23:15:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.930 23:15:33 -- accel/accel.sh@20 -- # IFS=: 00:07:41.930 23:15:33 -- accel/accel.sh@20 -- # read -r var val 00:07:41.930 23:15:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:41.930 23:15:33 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:41.930 ************************************ 00:07:41.930 END TEST accel_crc32c 00:07:41.930 ************************************ 00:07:41.930 23:15:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.930 00:07:41.930 real 0m5.551s 00:07:41.930 user 0m4.858s 00:07:41.930 sys 0m0.487s 00:07:41.930 23:15:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.930 23:15:33 -- common/autotest_common.sh@10 -- # set +x 00:07:41.930 23:15:33 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:41.930 23:15:33 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:41.930 23:15:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.930 23:15:33 -- common/autotest_common.sh@10 -- # set +x 00:07:42.190 ************************************ 00:07:42.190 START TEST accel_crc32c_C2 00:07:42.190 ************************************ 00:07:42.190 23:15:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:42.190 23:15:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.190 23:15:33 -- accel/accel.sh@17 -- # local accel_module 00:07:42.190 23:15:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:42.190 23:15:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:42.190 23:15:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.190 23:15:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.190 23:15:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.190 23:15:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.190 23:15:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.190 23:15:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.190 23:15:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.190 23:15:33 -- accel/accel.sh@42 -- # jq -r . 00:07:42.190 [2024-07-26 23:15:33.758464] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:42.190 [2024-07-26 23:15:33.758729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59415 ] 00:07:42.190 [2024-07-26 23:15:33.934407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.449 [2024-07-26 23:15:34.196014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.985 23:15:36 -- accel/accel.sh@18 -- # out=' 00:07:44.985 SPDK Configuration: 00:07:44.985 Core mask: 0x1 00:07:44.985 00:07:44.985 Accel Perf Configuration: 00:07:44.985 Workload Type: crc32c 00:07:44.985 CRC-32C seed: 0 00:07:44.985 Transfer size: 4096 bytes 00:07:44.985 Vector count 2 00:07:44.985 Module: software 00:07:44.985 Queue depth: 32 00:07:44.985 Allocate depth: 32 00:07:44.985 # threads/core: 1 00:07:44.985 Run time: 1 seconds 00:07:44.985 Verify: Yes 00:07:44.985 00:07:44.985 Running for 1 seconds... 
00:07:44.985 00:07:44.985 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.985 ------------------------------------------------------------------------------------ 00:07:44.985 0,0 449696/s 3513 MiB/s 0 0 00:07:44.985 ==================================================================================== 00:07:44.985 Total 449696/s 1756 MiB/s 0 0' 00:07:44.985 23:15:36 -- accel/accel.sh@20 -- # IFS=: 00:07:44.985 23:15:36 -- accel/accel.sh@20 -- # read -r var val 00:07:44.985 23:15:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:44.985 23:15:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.985 23:15:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:44.985 23:15:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.985 23:15:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.985 23:15:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.985 23:15:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.985 23:15:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.985 23:15:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.985 23:15:36 -- accel/accel.sh@42 -- # jq -r . 00:07:44.985 [2024-07-26 23:15:36.543290] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:44.985 [2024-07-26 23:15:36.543404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59451 ] 00:07:44.985 [2024-07-26 23:15:36.712417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.244 [2024-07-26 23:15:36.971761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.503 23:15:37 -- accel/accel.sh@21 -- # val= 00:07:45.503 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.503 23:15:37 -- accel/accel.sh@21 -- # val= 00:07:45.503 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.503 23:15:37 -- accel/accel.sh@21 -- # val=0x1 00:07:45.503 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.503 23:15:37 -- accel/accel.sh@21 -- # val= 00:07:45.503 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.503 23:15:37 -- accel/accel.sh@21 -- # val= 00:07:45.503 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.503 23:15:37 -- accel/accel.sh@21 -- # val=crc32c 00:07:45.503 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.503 23:15:37 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.503 23:15:37 -- accel/accel.sh@21 -- # val=0 00:07:45.503 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.503 23:15:37 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.503 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.503 23:15:37 -- accel/accel.sh@21 -- # val= 00:07:45.503 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.503 23:15:37 -- accel/accel.sh@21 -- # val=software 00:07:45.503 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.503 23:15:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.503 23:15:37 -- accel/accel.sh@21 -- # val=32 00:07:45.503 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.503 23:15:37 -- accel/accel.sh@21 -- # val=32 00:07:45.503 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.503 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.762 23:15:37 -- accel/accel.sh@21 -- # val=1 00:07:45.762 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.762 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.762 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.762 23:15:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.762 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.762 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.762 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.762 23:15:37 -- accel/accel.sh@21 -- # val=Yes 00:07:45.762 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.762 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.762 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.762 23:15:37 -- accel/accel.sh@21 -- # val= 00:07:45.762 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.762 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.762 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:45.762 23:15:37 -- accel/accel.sh@21 -- # val= 00:07:45.762 23:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.762 23:15:37 -- accel/accel.sh@20 -- # IFS=: 00:07:45.762 23:15:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.665 23:15:39 -- accel/accel.sh@21 -- # val= 00:07:47.665 23:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.665 23:15:39 -- accel/accel.sh@20 -- # IFS=: 00:07:47.665 23:15:39 -- accel/accel.sh@20 -- # read -r var val 00:07:47.665 23:15:39 -- accel/accel.sh@21 -- # val= 00:07:47.665 23:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.665 23:15:39 -- accel/accel.sh@20 -- # IFS=: 00:07:47.665 23:15:39 -- accel/accel.sh@20 -- # read -r var val 00:07:47.665 23:15:39 -- accel/accel.sh@21 -- # val= 00:07:47.665 23:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.665 23:15:39 -- accel/accel.sh@20 -- # IFS=: 00:07:47.665 23:15:39 -- accel/accel.sh@20 -- # read -r var val 00:07:47.665 23:15:39 -- accel/accel.sh@21 -- # val= 00:07:47.665 23:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.665 23:15:39 -- accel/accel.sh@20 -- # IFS=: 00:07:47.665 23:15:39 -- accel/accel.sh@20 -- # read -r var val 00:07:47.665 23:15:39 -- accel/accel.sh@21 -- # val= 00:07:47.665 23:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.665 23:15:39 -- accel/accel.sh@20 -- # IFS=: 00:07:47.665 23:15:39 -- 
accel/accel.sh@20 -- # read -r var val 00:07:47.665 23:15:39 -- accel/accel.sh@21 -- # val= 00:07:47.665 23:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.665 23:15:39 -- accel/accel.sh@20 -- # IFS=: 00:07:47.665 23:15:39 -- accel/accel.sh@20 -- # read -r var val 00:07:47.665 23:15:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.665 23:15:39 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:47.665 23:15:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.665 00:07:47.665 real 0m5.573s 00:07:47.665 user 0m4.880s 00:07:47.665 sys 0m0.485s 00:07:47.665 ************************************ 00:07:47.665 END TEST accel_crc32c_C2 00:07:47.665 ************************************ 00:07:47.665 23:15:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.665 23:15:39 -- common/autotest_common.sh@10 -- # set +x 00:07:47.665 23:15:39 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:47.665 23:15:39 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:47.665 23:15:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.665 23:15:39 -- common/autotest_common.sh@10 -- # set +x 00:07:47.665 ************************************ 00:07:47.665 START TEST accel_copy 00:07:47.665 ************************************ 00:07:47.665 23:15:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:47.665 23:15:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.665 23:15:39 -- accel/accel.sh@17 -- # local accel_module 00:07:47.666 23:15:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:47.666 23:15:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:47.666 23:15:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.666 23:15:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.666 23:15:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.666 23:15:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.666 23:15:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.666 23:15:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.666 23:15:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.666 23:15:39 -- accel/accel.sh@42 -- # jq -r . 00:07:47.666 [2024-07-26 23:15:39.405153] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:47.666 [2024-07-26 23:15:39.405425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59499 ] 00:07:47.925 [2024-07-26 23:15:39.578856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.184 [2024-07-26 23:15:39.844436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.791 23:15:42 -- accel/accel.sh@18 -- # out=' 00:07:50.791 SPDK Configuration: 00:07:50.791 Core mask: 0x1 00:07:50.791 00:07:50.791 Accel Perf Configuration: 00:07:50.791 Workload Type: copy 00:07:50.791 Transfer size: 4096 bytes 00:07:50.791 Vector count 1 00:07:50.791 Module: software 00:07:50.791 Queue depth: 32 00:07:50.791 Allocate depth: 32 00:07:50.791 # threads/core: 1 00:07:50.791 Run time: 1 seconds 00:07:50.791 Verify: Yes 00:07:50.791 00:07:50.791 Running for 1 seconds... 
00:07:50.791 00:07:50.791 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.791 ------------------------------------------------------------------------------------ 00:07:50.791 0,0 371488/s 1451 MiB/s 0 0 00:07:50.791 ==================================================================================== 00:07:50.791 Total 371488/s 1451 MiB/s 0 0' 00:07:50.791 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:50.791 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:50.791 23:15:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:50.791 23:15:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:50.791 23:15:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.791 23:15:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.791 23:15:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.791 23:15:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.791 23:15:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.791 23:15:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.791 23:15:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.791 23:15:42 -- accel/accel.sh@42 -- # jq -r . 00:07:50.791 [2024-07-26 23:15:42.176811] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:50.791 [2024-07-26 23:15:42.176905] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59536 ] 00:07:50.791 [2024-07-26 23:15:42.343116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.051 [2024-07-26 23:15:42.598399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.310 23:15:42 -- accel/accel.sh@21 -- # val= 00:07:51.310 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.310 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.310 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.310 23:15:42 -- accel/accel.sh@21 -- # val= 00:07:51.310 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.310 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.310 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.310 23:15:42 -- accel/accel.sh@21 -- # val=0x1 00:07:51.310 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.310 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.310 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.310 23:15:42 -- accel/accel.sh@21 -- # val= 00:07:51.310 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.310 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.310 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.310 23:15:42 -- accel/accel.sh@21 -- # val= 00:07:51.310 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.310 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.310 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.310 23:15:42 -- accel/accel.sh@21 -- # val=copy 00:07:51.310 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.310 23:15:42 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:51.310 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.310 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.311 23:15:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:51.311 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.311 23:15:42 -- 
accel/accel.sh@21 -- # val= 00:07:51.311 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.311 23:15:42 -- accel/accel.sh@21 -- # val=software 00:07:51.311 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.311 23:15:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.311 23:15:42 -- accel/accel.sh@21 -- # val=32 00:07:51.311 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.311 23:15:42 -- accel/accel.sh@21 -- # val=32 00:07:51.311 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.311 23:15:42 -- accel/accel.sh@21 -- # val=1 00:07:51.311 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.311 23:15:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:51.311 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.311 23:15:42 -- accel/accel.sh@21 -- # val=Yes 00:07:51.311 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.311 23:15:42 -- accel/accel.sh@21 -- # val= 00:07:51.311 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.311 23:15:42 -- accel/accel.sh@21 -- # val= 00:07:51.311 23:15:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.311 23:15:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.216 23:15:44 -- accel/accel.sh@21 -- # val= 00:07:53.216 23:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.216 23:15:44 -- accel/accel.sh@20 -- # IFS=: 00:07:53.216 23:15:44 -- accel/accel.sh@20 -- # read -r var val 00:07:53.216 23:15:44 -- accel/accel.sh@21 -- # val= 00:07:53.216 23:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.216 23:15:44 -- accel/accel.sh@20 -- # IFS=: 00:07:53.216 23:15:44 -- accel/accel.sh@20 -- # read -r var val 00:07:53.216 23:15:44 -- accel/accel.sh@21 -- # val= 00:07:53.216 23:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.216 23:15:44 -- accel/accel.sh@20 -- # IFS=: 00:07:53.216 23:15:44 -- accel/accel.sh@20 -- # read -r var val 00:07:53.216 23:15:44 -- accel/accel.sh@21 -- # val= 00:07:53.216 23:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.216 23:15:44 -- accel/accel.sh@20 -- # IFS=: 00:07:53.216 23:15:44 -- accel/accel.sh@20 -- # read -r var val 00:07:53.216 23:15:44 -- accel/accel.sh@21 -- # val= 00:07:53.216 23:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.216 23:15:44 -- accel/accel.sh@20 -- # IFS=: 00:07:53.216 23:15:44 -- accel/accel.sh@20 -- # read -r var val 00:07:53.216 23:15:44 -- accel/accel.sh@21 -- # val= 00:07:53.216 23:15:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.216 23:15:44 -- accel/accel.sh@20 -- # IFS=: 00:07:53.216 23:15:44 -- 
accel/accel.sh@20 -- # read -r var val 00:07:53.216 23:15:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:53.216 23:15:44 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:53.216 23:15:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.216 00:07:53.216 real 0m5.525s 00:07:53.216 user 0m4.833s 00:07:53.216 sys 0m0.480s 00:07:53.216 ************************************ 00:07:53.216 END TEST accel_copy 00:07:53.216 ************************************ 00:07:53.216 23:15:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.216 23:15:44 -- common/autotest_common.sh@10 -- # set +x 00:07:53.216 23:15:44 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:53.216 23:15:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:53.216 23:15:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.216 23:15:44 -- common/autotest_common.sh@10 -- # set +x 00:07:53.216 ************************************ 00:07:53.216 START TEST accel_fill 00:07:53.216 ************************************ 00:07:53.216 23:15:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:53.216 23:15:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:53.216 23:15:44 -- accel/accel.sh@17 -- # local accel_module 00:07:53.216 23:15:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:53.217 23:15:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:53.217 23:15:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.217 23:15:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.217 23:15:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.217 23:15:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.217 23:15:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.217 23:15:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.217 23:15:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.217 23:15:44 -- accel/accel.sh@42 -- # jq -r . 00:07:53.475 [2024-07-26 23:15:45.005260] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:53.475 [2024-07-26 23:15:45.005522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59577 ] 00:07:53.475 [2024-07-26 23:15:45.178719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.735 [2024-07-26 23:15:45.432325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.270 23:15:47 -- accel/accel.sh@18 -- # out=' 00:07:56.270 SPDK Configuration: 00:07:56.270 Core mask: 0x1 00:07:56.270 00:07:56.270 Accel Perf Configuration: 00:07:56.270 Workload Type: fill 00:07:56.270 Fill pattern: 0x80 00:07:56.270 Transfer size: 4096 bytes 00:07:56.270 Vector count 1 00:07:56.270 Module: software 00:07:56.270 Queue depth: 64 00:07:56.270 Allocate depth: 64 00:07:56.270 # threads/core: 1 00:07:56.270 Run time: 1 seconds 00:07:56.270 Verify: Yes 00:07:56.270 00:07:56.270 Running for 1 seconds... 
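The Bandwidth column in these tables is just the Transfers column times the transfer size, expressed in MiB/s; the fill table that follows can be checked the same way. For the copy run above:

# 371488 transfers/s x 4096 B per transfer, in MiB/s
echo $(( 371488 * 4096 / 1048576 ))   # -> 1451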
00:07:56.270 00:07:56.270 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:56.270 ------------------------------------------------------------------------------------ 00:07:56.270 0,0 603456/s 2357 MiB/s 0 0 00:07:56.270 ==================================================================================== 00:07:56.270 Total 603456/s 2357 MiB/s 0 0' 00:07:56.270 23:15:47 -- accel/accel.sh@20 -- # IFS=: 00:07:56.270 23:15:47 -- accel/accel.sh@20 -- # read -r var val 00:07:56.270 23:15:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:56.270 23:15:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:56.270 23:15:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.270 23:15:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.270 23:15:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.270 23:15:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.270 23:15:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.270 23:15:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.270 23:15:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.270 23:15:47 -- accel/accel.sh@42 -- # jq -r . 00:07:56.270 [2024-07-26 23:15:47.762656] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:56.270 [2024-07-26 23:15:47.762765] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59614 ] 00:07:56.270 [2024-07-26 23:15:47.932560] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.530 [2024-07-26 23:15:48.182730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val= 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val= 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val=0x1 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val= 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val= 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val=fill 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val=0x80 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 
00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val= 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val=software 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val=64 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val=64 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val=1 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val=Yes 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.789 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.789 23:15:48 -- accel/accel.sh@21 -- # val= 00:07:56.789 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.790 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.790 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:56.790 23:15:48 -- accel/accel.sh@21 -- # val= 00:07:56.790 23:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.790 23:15:48 -- accel/accel.sh@20 -- # IFS=: 00:07:56.790 23:15:48 -- accel/accel.sh@20 -- # read -r var val 00:07:59.326 23:15:50 -- accel/accel.sh@21 -- # val= 00:07:59.326 23:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.326 23:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:59.326 23:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:59.326 23:15:50 -- accel/accel.sh@21 -- # val= 00:07:59.326 23:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.326 23:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:59.326 23:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:59.326 23:15:50 -- accel/accel.sh@21 -- # val= 00:07:59.326 23:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.326 23:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:59.326 23:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:59.326 23:15:50 -- accel/accel.sh@21 -- # val= 00:07:59.326 23:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.326 23:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:59.326 23:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:59.326 23:15:50 -- accel/accel.sh@21 -- # val= 00:07:59.326 23:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.326 23:15:50 -- accel/accel.sh@20 -- # IFS=: 
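The long val= runs above are accel.sh stepping every parameter of the job through a read/case loop under xtrace, which is why each value is followed by case "$var" in, IFS=:, and read -r var val entries, with the interesting keys captured along the way (accel_opc=fill, accel_module=software). A rough sketch of that loop's shape, reconstructed from the @20-@24 trace lines rather than copied from accel.sh:

# sketch (assumption): walk the "key: value" lines of the captured report in
# $out, remembering opcode and module for the assertions that end each test
while IFS=: read -r var val; do
    case "$var" in
        *"Workload Type"*) accel_opc=${val# } ;;
        *Module*) accel_module=${val# } ;;
    esac
done <<< "$out"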
00:07:59.326 23:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:59.326 23:15:50 -- accel/accel.sh@21 -- # val= 00:07:59.326 23:15:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.326 23:15:50 -- accel/accel.sh@20 -- # IFS=: 00:07:59.326 23:15:50 -- accel/accel.sh@20 -- # read -r var val 00:07:59.326 23:15:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:59.326 23:15:50 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:59.326 23:15:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.326 00:07:59.326 real 0m5.539s 00:07:59.326 user 0m4.847s 00:07:59.326 sys 0m0.486s 00:07:59.326 23:15:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.326 ************************************ 00:07:59.326 END TEST accel_fill 00:07:59.326 ************************************ 00:07:59.326 23:15:50 -- common/autotest_common.sh@10 -- # set +x 00:07:59.327 23:15:50 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:59.327 23:15:50 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:59.327 23:15:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.327 23:15:50 -- common/autotest_common.sh@10 -- # set +x 00:07:59.327 ************************************ 00:07:59.327 START TEST accel_copy_crc32c 00:07:59.327 ************************************ 00:07:59.327 23:15:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:59.327 23:15:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:59.327 23:15:50 -- accel/accel.sh@17 -- # local accel_module 00:07:59.327 23:15:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:59.327 23:15:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:59.327 23:15:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.327 23:15:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.327 23:15:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.327 23:15:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.327 23:15:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.327 23:15:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.327 23:15:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.327 23:15:50 -- accel/accel.sh@42 -- # jq -r . 00:07:59.327 [2024-07-26 23:15:50.625707] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:59.327 [2024-07-26 23:15:50.625833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59666 ] 00:07:59.327 [2024-07-26 23:15:50.796738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.327 [2024-07-26 23:15:51.051304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.864 23:15:53 -- accel/accel.sh@18 -- # out=' 00:08:01.864 SPDK Configuration: 00:08:01.864 Core mask: 0x1 00:08:01.864 00:08:01.864 Accel Perf Configuration: 00:08:01.864 Workload Type: copy_crc32c 00:08:01.864 CRC-32C seed: 0 00:08:01.864 Vector size: 4096 bytes 00:08:01.864 Transfer size: 4096 bytes 00:08:01.864 Vector count 1 00:08:01.864 Module: software 00:08:01.864 Queue depth: 32 00:08:01.864 Allocate depth: 32 00:08:01.864 # threads/core: 1 00:08:01.864 Run time: 1 seconds 00:08:01.864 Verify: Yes 00:08:01.864 00:08:01.864 Running for 1 seconds... 
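The [[ software == \s\o\f\t\w\a\r\e ]] entries that close each test are not corruption: inside [[ ]] the right-hand side of == is a pattern, so bash's xtrace prints the quoted literal with every character backslash-escaped. The underlying check is a plain string comparison against the module parsed from the run:

# what the escaped trace lines correspond to (sketch)
[[ -n $accel_module ]]               # a module name was parsed at all
[[ $accel_module == "software" ]]    # and it is the software fallback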
00:08:01.864 00:08:01.864 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:01.864 ------------------------------------------------------------------------------------ 00:08:01.864 0,0 294336/s 1149 MiB/s 0 0 00:08:01.864 ==================================================================================== 00:08:01.864 Total 294336/s 1149 MiB/s 0 0' 00:08:01.864 23:15:53 -- accel/accel.sh@20 -- # IFS=: 00:08:01.864 23:15:53 -- accel/accel.sh@20 -- # read -r var val 00:08:01.864 23:15:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:01.864 23:15:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:01.864 23:15:53 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.864 23:15:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.864 23:15:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.864 23:15:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.864 23:15:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.864 23:15:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.864 23:15:53 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.864 23:15:53 -- accel/accel.sh@42 -- # jq -r . 00:08:01.864 [2024-07-26 23:15:53.396116] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:01.864 [2024-07-26 23:15:53.396224] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59693 ] 00:08:01.864 [2024-07-26 23:15:53.564180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.123 [2024-07-26 23:15:53.818153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val= 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val= 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val=0x1 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val= 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val= 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val=0 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 
23:15:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val= 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val=software 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@23 -- # accel_module=software 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val=32 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val=32 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val=1 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val=Yes 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val= 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:02.383 23:15:54 -- accel/accel.sh@21 -- # val= 00:08:02.383 23:15:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # IFS=: 00:08:02.383 23:15:54 -- accel/accel.sh@20 -- # read -r var val 00:08:04.916 23:15:56 -- accel/accel.sh@21 -- # val= 00:08:04.916 23:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.916 23:15:56 -- accel/accel.sh@20 -- # IFS=: 00:08:04.916 23:15:56 -- accel/accel.sh@20 -- # read -r var val 00:08:04.916 23:15:56 -- accel/accel.sh@21 -- # val= 00:08:04.916 23:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.916 23:15:56 -- accel/accel.sh@20 -- # IFS=: 00:08:04.916 23:15:56 -- accel/accel.sh@20 -- # read -r var val 00:08:04.916 23:15:56 -- accel/accel.sh@21 -- # val= 00:08:04.916 23:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.916 23:15:56 -- accel/accel.sh@20 -- # IFS=: 00:08:04.916 23:15:56 -- accel/accel.sh@20 -- # read -r var val 00:08:04.916 23:15:56 -- accel/accel.sh@21 -- # val= 00:08:04.916 23:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.916 23:15:56 -- accel/accel.sh@20 -- # IFS=: 
00:08:04.916 23:15:56 -- accel/accel.sh@20 -- # read -r var val 00:08:04.916 23:15:56 -- accel/accel.sh@21 -- # val= 00:08:04.916 23:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.916 23:15:56 -- accel/accel.sh@20 -- # IFS=: 00:08:04.916 23:15:56 -- accel/accel.sh@20 -- # read -r var val 00:08:04.916 23:15:56 -- accel/accel.sh@21 -- # val= 00:08:04.916 23:15:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.916 23:15:56 -- accel/accel.sh@20 -- # IFS=: 00:08:04.916 23:15:56 -- accel/accel.sh@20 -- # read -r var val 00:08:04.916 23:15:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:04.916 23:15:56 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:04.916 23:15:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.916 00:08:04.916 real 0m5.548s 00:08:04.916 user 0m4.864s 00:08:04.916 sys 0m0.481s 00:08:04.916 ************************************ 00:08:04.916 END TEST accel_copy_crc32c 00:08:04.916 ************************************ 00:08:04.916 23:15:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.916 23:15:56 -- common/autotest_common.sh@10 -- # set +x 00:08:04.916 23:15:56 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:04.916 23:15:56 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:04.916 23:15:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.916 23:15:56 -- common/autotest_common.sh@10 -- # set +x 00:08:04.916 ************************************ 00:08:04.916 START TEST accel_copy_crc32c_C2 00:08:04.916 ************************************ 00:08:04.916 23:15:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:04.916 23:15:56 -- accel/accel.sh@16 -- # local accel_opc 00:08:04.916 23:15:56 -- accel/accel.sh@17 -- # local accel_module 00:08:04.916 23:15:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:04.916 23:15:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:04.916 23:15:56 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.916 23:15:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:04.916 23:15:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.916 23:15:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.916 23:15:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:04.916 23:15:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:04.916 23:15:56 -- accel/accel.sh@41 -- # local IFS=, 00:08:04.916 23:15:56 -- accel/accel.sh@42 -- # jq -r . 00:08:04.916 [2024-07-26 23:15:56.251867] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:04.916 [2024-07-26 23:15:56.252180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59744 ] 00:08:04.916 [2024-07-26 23:15:56.423129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.175 [2024-07-26 23:15:56.682983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.708 23:15:58 -- accel/accel.sh@18 -- # out=' 00:08:07.708 SPDK Configuration: 00:08:07.708 Core mask: 0x1 00:08:07.708 00:08:07.708 Accel Perf Configuration: 00:08:07.708 Workload Type: copy_crc32c 00:08:07.708 CRC-32C seed: 0 00:08:07.708 Vector size: 4096 bytes 00:08:07.708 Transfer size: 8192 bytes 00:08:07.708 Vector count 2 00:08:07.708 Module: software 00:08:07.708 Queue depth: 32 00:08:07.708 Allocate depth: 32 00:08:07.708 # threads/core: 1 00:08:07.708 Run time: 1 seconds 00:08:07.708 Verify: Yes 00:08:07.708 00:08:07.708 Running for 1 seconds... 00:08:07.708 00:08:07.708 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:07.708 ------------------------------------------------------------------------------------ 00:08:07.708 0,0 208096/s 1625 MiB/s 0 0 00:08:07.708 ==================================================================================== 00:08:07.708 Total 208096/s 1625 MiB/s 0 0' 00:08:07.708 23:15:58 -- accel/accel.sh@20 -- # IFS=: 00:08:07.708 23:15:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:07.708 23:15:58 -- accel/accel.sh@20 -- # read -r var val 00:08:07.708 23:15:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:07.708 23:15:58 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.708 23:15:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:07.708 23:15:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.708 23:15:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.708 23:15:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:07.708 23:15:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:07.708 23:15:58 -- accel/accel.sh@41 -- # local IFS=, 00:08:07.708 23:15:58 -- accel/accel.sh@42 -- # jq -r . 00:08:07.708 [2024-07-26 23:15:59.021840] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
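Running copy_crc32c with -C 2 chains two 4096-byte source vectors per operation, so this table's per-transfer size is 8192 bytes, and with one core the Total row simply repeats the single 0,0 row. The bandwidth figure checks out the same way as before:

# 208096 transfers/s x 8192 B per transfer, in MiB/s
echo $(( 208096 * 8192 / 1048576 ))   # -> 1625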
00:08:07.708 [2024-07-26 23:15:59.022109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59781 ] 00:08:07.708 [2024-07-26 23:15:59.192806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.708 [2024-07-26 23:15:59.442067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.967 23:15:59 -- accel/accel.sh@21 -- # val= 00:08:07.967 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:07.967 23:15:59 -- accel/accel.sh@21 -- # val= 00:08:07.967 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:07.967 23:15:59 -- accel/accel.sh@21 -- # val=0x1 00:08:07.967 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:07.967 23:15:59 -- accel/accel.sh@21 -- # val= 00:08:07.967 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:07.967 23:15:59 -- accel/accel.sh@21 -- # val= 00:08:07.967 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:07.967 23:15:59 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:07.967 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.967 23:15:59 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:07.967 23:15:59 -- accel/accel.sh@21 -- # val=0 00:08:07.967 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:07.967 23:15:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:07.967 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:07.967 23:15:59 -- accel/accel.sh@21 -- # val='8192 bytes' 00:08:07.967 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:07.967 23:15:59 -- accel/accel.sh@21 -- # val= 00:08:07.967 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:07.967 23:15:59 -- accel/accel.sh@21 -- # val=software 00:08:07.967 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.967 23:15:59 -- accel/accel.sh@23 -- # accel_module=software 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:07.967 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:07.967 23:15:59 -- accel/accel.sh@21 -- # val=32 00:08:07.967 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.968 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:07.968 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:07.968 23:15:59 -- accel/accel.sh@21 -- # val=32 
00:08:07.968 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.968 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:07.968 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:07.968 23:15:59 -- accel/accel.sh@21 -- # val=1 00:08:07.968 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.968 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:08.226 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:08.226 23:15:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:08.226 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.226 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:08.226 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:08.226 23:15:59 -- accel/accel.sh@21 -- # val=Yes 00:08:08.226 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.226 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:08.226 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:08.226 23:15:59 -- accel/accel.sh@21 -- # val= 00:08:08.226 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.226 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:08.226 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:08.226 23:15:59 -- accel/accel.sh@21 -- # val= 00:08:08.226 23:15:59 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.226 23:15:59 -- accel/accel.sh@20 -- # IFS=: 00:08:08.226 23:15:59 -- accel/accel.sh@20 -- # read -r var val 00:08:10.140 23:16:01 -- accel/accel.sh@21 -- # val= 00:08:10.140 23:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.140 23:16:01 -- accel/accel.sh@20 -- # IFS=: 00:08:10.140 23:16:01 -- accel/accel.sh@20 -- # read -r var val 00:08:10.140 23:16:01 -- accel/accel.sh@21 -- # val= 00:08:10.140 23:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.140 23:16:01 -- accel/accel.sh@20 -- # IFS=: 00:08:10.140 23:16:01 -- accel/accel.sh@20 -- # read -r var val 00:08:10.140 23:16:01 -- accel/accel.sh@21 -- # val= 00:08:10.140 23:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.140 23:16:01 -- accel/accel.sh@20 -- # IFS=: 00:08:10.140 23:16:01 -- accel/accel.sh@20 -- # read -r var val 00:08:10.140 23:16:01 -- accel/accel.sh@21 -- # val= 00:08:10.140 23:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.140 23:16:01 -- accel/accel.sh@20 -- # IFS=: 00:08:10.140 23:16:01 -- accel/accel.sh@20 -- # read -r var val 00:08:10.140 23:16:01 -- accel/accel.sh@21 -- # val= 00:08:10.140 23:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.140 23:16:01 -- accel/accel.sh@20 -- # IFS=: 00:08:10.140 23:16:01 -- accel/accel.sh@20 -- # read -r var val 00:08:10.140 23:16:01 -- accel/accel.sh@21 -- # val= 00:08:10.140 23:16:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.140 23:16:01 -- accel/accel.sh@20 -- # IFS=: 00:08:10.140 23:16:01 -- accel/accel.sh@20 -- # read -r var val 00:08:10.140 23:16:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:10.140 23:16:01 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:10.140 23:16:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.140 00:08:10.140 real 0m5.532s 00:08:10.140 user 0m4.859s 00:08:10.140 sys 0m0.467s 00:08:10.140 23:16:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.140 ************************************ 00:08:10.140 END TEST accel_copy_crc32c_C2 00:08:10.140 ************************************ 00:08:10.140 23:16:01 -- common/autotest_common.sh@10 -- # set +x 00:08:10.140 23:16:01 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:10.140 23:16:01 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:08:10.140 23:16:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.140 23:16:01 -- common/autotest_common.sh@10 -- # set +x 00:08:10.140 ************************************ 00:08:10.140 START TEST accel_dualcast 00:08:10.140 ************************************ 00:08:10.140 23:16:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:08:10.140 23:16:01 -- accel/accel.sh@16 -- # local accel_opc 00:08:10.140 23:16:01 -- accel/accel.sh@17 -- # local accel_module 00:08:10.140 23:16:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:08:10.140 23:16:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:10.140 23:16:01 -- accel/accel.sh@12 -- # build_accel_config 00:08:10.140 23:16:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:10.140 23:16:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.140 23:16:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.140 23:16:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:10.140 23:16:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:10.140 23:16:01 -- accel/accel.sh@41 -- # local IFS=, 00:08:10.140 23:16:01 -- accel/accel.sh@42 -- # jq -r . 00:08:10.140 [2024-07-26 23:16:01.856949] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:10.140 [2024-07-26 23:16:01.857075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59828 ] 00:08:10.400 [2024-07-26 23:16:02.031380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.659 [2024-07-26 23:16:02.293654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.196 23:16:04 -- accel/accel.sh@18 -- # out=' 00:08:13.196 SPDK Configuration: 00:08:13.196 Core mask: 0x1 00:08:13.196 00:08:13.196 Accel Perf Configuration: 00:08:13.196 Workload Type: dualcast 00:08:13.196 Transfer size: 4096 bytes 00:08:13.196 Vector count 1 00:08:13.196 Module: software 00:08:13.196 Queue depth: 32 00:08:13.196 Allocate depth: 32 00:08:13.196 # threads/core: 1 00:08:13.196 Run time: 1 seconds 00:08:13.196 Verify: Yes 00:08:13.196 00:08:13.196 Running for 1 seconds... 00:08:13.196 00:08:13.196 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:13.196 ------------------------------------------------------------------------------------ 00:08:13.196 0,0 431936/s 1687 MiB/s 0 0 00:08:13.196 ==================================================================================== 00:08:13.196 Total 431936/s 1687 MiB/s 0 0' 00:08:13.196 23:16:04 -- accel/accel.sh@20 -- # IFS=: 00:08:13.196 23:16:04 -- accel/accel.sh@20 -- # read -r var val 00:08:13.196 23:16:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:13.196 23:16:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:13.196 23:16:04 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.196 23:16:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.196 23:16:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.196 23:16:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.196 23:16:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.196 23:16:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.196 23:16:04 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.196 23:16:04 -- accel/accel.sh@42 -- # jq -r . 
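Every test in this block starts SPDK twice, which is why two startup banners with consecutive pids appear per case (59828 and then 59859 for dualcast here): the first accel_perf invocation (accel.sh@18) has its report captured into out= -- the configuration block and table above are xtrace printing that captured string, hence the closing quote after the Total row -- and the second (accel.sh@15) produces the stream walked by the read/case parser. A sketch of that first capture step, again dropping the empty /dev/fd/62 config (assumption):

# first invocation: capture and print the report (sketch)
out=$(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y)
echo "$out"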
00:08:13.196 [2024-07-26 23:16:04.643688] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:13.196 [2024-07-26 23:16:04.643787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59859 ] 00:08:13.196 [2024-07-26 23:16:04.810696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.455 [2024-07-26 23:16:05.059640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.714 23:16:05 -- accel/accel.sh@21 -- # val= 00:08:13.714 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.714 23:16:05 -- accel/accel.sh@21 -- # val= 00:08:13.714 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.714 23:16:05 -- accel/accel.sh@21 -- # val=0x1 00:08:13.714 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.714 23:16:05 -- accel/accel.sh@21 -- # val= 00:08:13.714 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.714 23:16:05 -- accel/accel.sh@21 -- # val= 00:08:13.714 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.714 23:16:05 -- accel/accel.sh@21 -- # val=dualcast 00:08:13.714 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.714 23:16:05 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.714 23:16:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:13.714 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.714 23:16:05 -- accel/accel.sh@21 -- # val= 00:08:13.714 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.714 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.714 23:16:05 -- accel/accel.sh@21 -- # val=software 00:08:13.715 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.715 23:16:05 -- accel/accel.sh@23 -- # accel_module=software 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.715 23:16:05 -- accel/accel.sh@21 -- # val=32 00:08:13.715 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.715 23:16:05 -- accel/accel.sh@21 -- # val=32 00:08:13.715 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.715 23:16:05 -- accel/accel.sh@21 -- # val=1 00:08:13.715 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.715 
23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.715 23:16:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:13.715 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.715 23:16:05 -- accel/accel.sh@21 -- # val=Yes 00:08:13.715 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.715 23:16:05 -- accel/accel.sh@21 -- # val= 00:08:13.715 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:13.715 23:16:05 -- accel/accel.sh@21 -- # val= 00:08:13.715 23:16:05 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # IFS=: 00:08:13.715 23:16:05 -- accel/accel.sh@20 -- # read -r var val 00:08:15.620 23:16:07 -- accel/accel.sh@21 -- # val= 00:08:15.620 23:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.620 23:16:07 -- accel/accel.sh@20 -- # IFS=: 00:08:15.620 23:16:07 -- accel/accel.sh@20 -- # read -r var val 00:08:15.620 23:16:07 -- accel/accel.sh@21 -- # val= 00:08:15.620 23:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.620 23:16:07 -- accel/accel.sh@20 -- # IFS=: 00:08:15.620 23:16:07 -- accel/accel.sh@20 -- # read -r var val 00:08:15.620 23:16:07 -- accel/accel.sh@21 -- # val= 00:08:15.620 23:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.620 23:16:07 -- accel/accel.sh@20 -- # IFS=: 00:08:15.620 23:16:07 -- accel/accel.sh@20 -- # read -r var val 00:08:15.620 23:16:07 -- accel/accel.sh@21 -- # val= 00:08:15.620 23:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.620 23:16:07 -- accel/accel.sh@20 -- # IFS=: 00:08:15.620 23:16:07 -- accel/accel.sh@20 -- # read -r var val 00:08:15.620 23:16:07 -- accel/accel.sh@21 -- # val= 00:08:15.620 23:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.620 23:16:07 -- accel/accel.sh@20 -- # IFS=: 00:08:15.620 23:16:07 -- accel/accel.sh@20 -- # read -r var val 00:08:15.620 23:16:07 -- accel/accel.sh@21 -- # val= 00:08:15.620 23:16:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.620 23:16:07 -- accel/accel.sh@20 -- # IFS=: 00:08:15.620 23:16:07 -- accel/accel.sh@20 -- # read -r var val 00:08:15.620 23:16:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:15.620 23:16:07 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:08:15.620 23:16:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.620 00:08:15.620 real 0m5.528s 00:08:15.620 user 0m4.845s 00:08:15.620 sys 0m0.478s 00:08:15.620 23:16:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.620 23:16:07 -- common/autotest_common.sh@10 -- # set +x 00:08:15.620 ************************************ 00:08:15.620 END TEST accel_dualcast 00:08:15.620 ************************************ 00:08:15.880 23:16:07 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:15.880 23:16:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:15.880 23:16:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:15.880 23:16:07 -- common/autotest_common.sh@10 -- # set +x 00:08:15.880 ************************************ 00:08:15.880 START TEST accel_compare 00:08:15.880 ************************************ 00:08:15.880 23:16:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:08:15.880 
23:16:07 -- accel/accel.sh@16 -- # local accel_opc 00:08:15.880 23:16:07 -- accel/accel.sh@17 -- # local accel_module 00:08:15.880 23:16:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:08:15.880 23:16:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:15.880 23:16:07 -- accel/accel.sh@12 -- # build_accel_config 00:08:15.880 23:16:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:15.880 23:16:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.880 23:16:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.880 23:16:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:15.880 23:16:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:15.880 23:16:07 -- accel/accel.sh@41 -- # local IFS=, 00:08:15.880 23:16:07 -- accel/accel.sh@42 -- # jq -r . 00:08:15.880 [2024-07-26 23:16:07.462690] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:15.880 [2024-07-26 23:16:07.462798] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59906 ] 00:08:16.140 [2024-07-26 23:16:07.635230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.399 [2024-07-26 23:16:07.895855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.937 23:16:10 -- accel/accel.sh@18 -- # out=' 00:08:18.937 SPDK Configuration: 00:08:18.937 Core mask: 0x1 00:08:18.937 00:08:18.937 Accel Perf Configuration: 00:08:18.937 Workload Type: compare 00:08:18.937 Transfer size: 4096 bytes 00:08:18.937 Vector count 1 00:08:18.937 Module: software 00:08:18.937 Queue depth: 32 00:08:18.937 Allocate depth: 32 00:08:18.937 # threads/core: 1 00:08:18.937 Run time: 1 seconds 00:08:18.937 Verify: Yes 00:08:18.937 00:08:18.937 Running for 1 seconds... 00:08:18.937 00:08:18.937 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:18.937 ------------------------------------------------------------------------------------ 00:08:18.937 0,0 567648/s 2217 MiB/s 0 0 00:08:18.937 ==================================================================================== 00:08:18.937 Total 567648/s 2217 MiB/s 0 0' 00:08:18.937 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:18.937 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:18.937 23:16:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:18.937 23:16:10 -- accel/accel.sh@12 -- # build_accel_config 00:08:18.937 23:16:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:18.937 23:16:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:18.937 23:16:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.937 23:16:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.937 23:16:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:18.937 23:16:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:18.937 23:16:10 -- accel/accel.sh@41 -- # local IFS=, 00:08:18.937 23:16:10 -- accel/accel.sh@42 -- # jq -r . 00:08:18.937 [2024-07-26 23:16:10.239142] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:18.937 [2024-07-26 23:16:10.239246] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59943 ] 00:08:18.937 [2024-07-26 23:16:10.417228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.937 [2024-07-26 23:16:10.671006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.197 23:16:10 -- accel/accel.sh@21 -- # val= 00:08:19.197 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.197 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.197 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.197 23:16:10 -- accel/accel.sh@21 -- # val= 00:08:19.197 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.197 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.197 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.197 23:16:10 -- accel/accel.sh@21 -- # val=0x1 00:08:19.197 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.197 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.197 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.197 23:16:10 -- accel/accel.sh@21 -- # val= 00:08:19.197 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.197 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.197 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.197 23:16:10 -- accel/accel.sh@21 -- # val= 00:08:19.197 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.197 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.197 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.197 23:16:10 -- accel/accel.sh@21 -- # val=compare 00:08:19.197 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.197 23:16:10 -- accel/accel.sh@24 -- # accel_opc=compare 00:08:19.197 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.197 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.197 23:16:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:19.457 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.457 23:16:10 -- accel/accel.sh@21 -- # val= 00:08:19.457 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.457 23:16:10 -- accel/accel.sh@21 -- # val=software 00:08:19.457 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.457 23:16:10 -- accel/accel.sh@23 -- # accel_module=software 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.457 23:16:10 -- accel/accel.sh@21 -- # val=32 00:08:19.457 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.457 23:16:10 -- accel/accel.sh@21 -- # val=32 00:08:19.457 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.457 23:16:10 -- accel/accel.sh@21 -- # val=1 00:08:19.457 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.457 23:16:10 -- accel/accel.sh@21 -- # val='1 seconds' 
00:08:19.457 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.457 23:16:10 -- accel/accel.sh@21 -- # val=Yes 00:08:19.457 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.457 23:16:10 -- accel/accel.sh@21 -- # val= 00:08:19.457 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:19.457 23:16:10 -- accel/accel.sh@21 -- # val= 00:08:19.457 23:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # IFS=: 00:08:19.457 23:16:10 -- accel/accel.sh@20 -- # read -r var val 00:08:21.363 23:16:12 -- accel/accel.sh@21 -- # val= 00:08:21.363 23:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.364 23:16:12 -- accel/accel.sh@20 -- # IFS=: 00:08:21.364 23:16:12 -- accel/accel.sh@20 -- # read -r var val 00:08:21.364 23:16:12 -- accel/accel.sh@21 -- # val= 00:08:21.364 23:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.364 23:16:12 -- accel/accel.sh@20 -- # IFS=: 00:08:21.364 23:16:12 -- accel/accel.sh@20 -- # read -r var val 00:08:21.364 23:16:12 -- accel/accel.sh@21 -- # val= 00:08:21.364 23:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.364 23:16:12 -- accel/accel.sh@20 -- # IFS=: 00:08:21.364 23:16:12 -- accel/accel.sh@20 -- # read -r var val 00:08:21.364 23:16:12 -- accel/accel.sh@21 -- # val= 00:08:21.364 23:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.364 23:16:12 -- accel/accel.sh@20 -- # IFS=: 00:08:21.364 23:16:12 -- accel/accel.sh@20 -- # read -r var val 00:08:21.364 23:16:12 -- accel/accel.sh@21 -- # val= 00:08:21.364 23:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.364 23:16:12 -- accel/accel.sh@20 -- # IFS=: 00:08:21.364 23:16:12 -- accel/accel.sh@20 -- # read -r var val 00:08:21.364 23:16:12 -- accel/accel.sh@21 -- # val= 00:08:21.364 23:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.364 23:16:12 -- accel/accel.sh@20 -- # IFS=: 00:08:21.364 23:16:12 -- accel/accel.sh@20 -- # read -r var val 00:08:21.364 23:16:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:21.364 23:16:12 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:08:21.364 ************************************ 00:08:21.364 END TEST accel_compare 00:08:21.364 ************************************ 00:08:21.364 23:16:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.364 00:08:21.364 real 0m5.549s 00:08:21.364 user 0m4.849s 00:08:21.364 sys 0m0.493s 00:08:21.364 23:16:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.364 23:16:12 -- common/autotest_common.sh@10 -- # set +x 00:08:21.364 23:16:13 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:21.364 23:16:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:21.364 23:16:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:21.364 23:16:13 -- common/autotest_common.sh@10 -- # set +x 00:08:21.364 ************************************ 00:08:21.364 START TEST accel_xor 00:08:21.364 ************************************ 00:08:21.364 23:16:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:08:21.364 23:16:13 -- accel/accel.sh@16 -- # local accel_opc 00:08:21.364 23:16:13 -- accel/accel.sh@17 -- # local accel_module 00:08:21.364 
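The xor case that follows drives two 4096-byte source buffers through a byte-wise XOR into one destination ("Source buffers: 2" in the configuration below). A plain-bash illustration of the operation being verified, not of how accel_perf implements it:

# 0xF0 ^ 0x3C on one byte; accel_perf applies this across whole 4 KiB buffers
a=$(( 0xF0 )); b=$(( 0x3C ))
printf '0x%02X\n' $(( a ^ b ))   # -> 0xCC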
23:16:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:08:21.364 23:16:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:21.364 23:16:13 -- accel/accel.sh@12 -- # build_accel_config 00:08:21.364 23:16:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:21.364 23:16:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.364 23:16:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.364 23:16:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:21.364 23:16:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:21.364 23:16:13 -- accel/accel.sh@41 -- # local IFS=, 00:08:21.364 23:16:13 -- accel/accel.sh@42 -- # jq -r . 00:08:21.364 [2024-07-26 23:16:13.088175] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:21.364 [2024-07-26 23:16:13.088437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59998 ] 00:08:21.623 [2024-07-26 23:16:13.256820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.881 [2024-07-26 23:16:13.514182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.488 23:16:15 -- accel/accel.sh@18 -- # out=' 00:08:24.488 SPDK Configuration: 00:08:24.488 Core mask: 0x1 00:08:24.488 00:08:24.488 Accel Perf Configuration: 00:08:24.488 Workload Type: xor 00:08:24.488 Source buffers: 2 00:08:24.488 Transfer size: 4096 bytes 00:08:24.488 Vector count 1 00:08:24.488 Module: software 00:08:24.488 Queue depth: 32 00:08:24.488 Allocate depth: 32 00:08:24.488 # threads/core: 1 00:08:24.488 Run time: 1 seconds 00:08:24.488 Verify: Yes 00:08:24.488 00:08:24.488 Running for 1 seconds... 00:08:24.488 00:08:24.488 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:24.488 ------------------------------------------------------------------------------------ 00:08:24.488 0,0 396704/s 1549 MiB/s 0 0 00:08:24.488 ==================================================================================== 00:08:24.488 Total 396704/s 1549 MiB/s 0 0' 00:08:24.488 23:16:15 -- accel/accel.sh@20 -- # IFS=: 00:08:24.488 23:16:15 -- accel/accel.sh@20 -- # read -r var val 00:08:24.488 23:16:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:24.488 23:16:15 -- accel/accel.sh@12 -- # build_accel_config 00:08:24.488 23:16:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:24.488 23:16:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:24.488 23:16:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.488 23:16:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.488 23:16:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:24.488 23:16:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:24.488 23:16:15 -- accel/accel.sh@41 -- # local IFS=, 00:08:24.488 23:16:15 -- accel/accel.sh@42 -- # jq -r . 00:08:24.488 [2024-07-26 23:16:15.876256] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
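Annotation on the xor result above: the bandwidth column is consistent with the 4096-byte transfer size, taking MiB as 2^20 bytes. A one-line shell check (not part of the captured run):

  # transfers/s x 4096 bytes per transfer, converted to MiB/s
  echo $(( 396704 * 4096 / 1048576 ))   # prints 1549, matching both table rows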
00:08:24.488 [2024-07-26 23:16:15.876353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60024 ] 00:08:24.488 [2024-07-26 23:16:16.044882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.748 [2024-07-26 23:16:16.294787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val= 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val= 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val=0x1 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val= 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val= 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val=xor 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val=2 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val= 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val=software 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@23 -- # accel_module=software 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val=32 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val=32 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val=1 00:08:25.008 23:16:16 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val=Yes 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val= 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.008 23:16:16 -- accel/accel.sh@21 -- # val= 00:08:25.008 23:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.008 23:16:16 -- accel/accel.sh@20 -- # read -r var val 00:08:26.915 23:16:18 -- accel/accel.sh@21 -- # val= 00:08:26.915 23:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.915 23:16:18 -- accel/accel.sh@20 -- # IFS=: 00:08:26.915 23:16:18 -- accel/accel.sh@20 -- # read -r var val 00:08:26.915 23:16:18 -- accel/accel.sh@21 -- # val= 00:08:26.915 23:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.915 23:16:18 -- accel/accel.sh@20 -- # IFS=: 00:08:26.915 23:16:18 -- accel/accel.sh@20 -- # read -r var val 00:08:26.915 23:16:18 -- accel/accel.sh@21 -- # val= 00:08:26.915 23:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.915 23:16:18 -- accel/accel.sh@20 -- # IFS=: 00:08:26.915 23:16:18 -- accel/accel.sh@20 -- # read -r var val 00:08:26.915 23:16:18 -- accel/accel.sh@21 -- # val= 00:08:26.915 23:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.915 23:16:18 -- accel/accel.sh@20 -- # IFS=: 00:08:26.915 23:16:18 -- accel/accel.sh@20 -- # read -r var val 00:08:26.915 23:16:18 -- accel/accel.sh@21 -- # val= 00:08:26.915 23:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.915 23:16:18 -- accel/accel.sh@20 -- # IFS=: 00:08:26.915 23:16:18 -- accel/accel.sh@20 -- # read -r var val 00:08:26.915 23:16:18 -- accel/accel.sh@21 -- # val= 00:08:26.915 23:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:26.915 23:16:18 -- accel/accel.sh@20 -- # IFS=: 00:08:26.915 23:16:18 -- accel/accel.sh@20 -- # read -r var val 00:08:26.915 23:16:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:26.915 23:16:18 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:26.915 ************************************ 00:08:26.915 END TEST accel_xor 00:08:26.915 ************************************ 00:08:26.915 23:16:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.915 00:08:26.915 real 0m5.556s 00:08:26.915 user 0m4.875s 00:08:26.915 sys 0m0.472s 00:08:26.915 23:16:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.915 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:08:26.915 23:16:18 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:26.915 23:16:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:26.915 23:16:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.915 23:16:18 -- common/autotest_common.sh@10 -- # set +x 00:08:26.915 ************************************ 00:08:26.915 START TEST accel_xor 00:08:26.915 ************************************ 00:08:26.915 
23:16:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:08:26.915 23:16:18 -- accel/accel.sh@16 -- # local accel_opc 00:08:26.915 23:16:18 -- accel/accel.sh@17 -- # local accel_module 00:08:26.915 23:16:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:08:26.915 23:16:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:26.915 23:16:18 -- accel/accel.sh@12 -- # build_accel_config 00:08:26.915 23:16:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:26.915 23:16:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.915 23:16:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.915 23:16:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:26.915 23:16:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:26.915 23:16:18 -- accel/accel.sh@41 -- # local IFS=, 00:08:26.915 23:16:18 -- accel/accel.sh@42 -- # jq -r . 00:08:27.174 [2024-07-26 23:16:18.720139] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:27.174 [2024-07-26 23:16:18.720257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60076 ] 00:08:27.174 [2024-07-26 23:16:18.895243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.434 [2024-07-26 23:16:19.156962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.973 23:16:21 -- accel/accel.sh@18 -- # out=' 00:08:29.973 SPDK Configuration: 00:08:29.973 Core mask: 0x1 00:08:29.973 00:08:29.973 Accel Perf Configuration: 00:08:29.973 Workload Type: xor 00:08:29.973 Source buffers: 3 00:08:29.973 Transfer size: 4096 bytes 00:08:29.973 Vector count 1 00:08:29.973 Module: software 00:08:29.973 Queue depth: 32 00:08:29.973 Allocate depth: 32 00:08:29.973 # threads/core: 1 00:08:29.973 Run time: 1 seconds 00:08:29.973 Verify: Yes 00:08:29.973 00:08:29.973 Running for 1 seconds... 00:08:29.973 00:08:29.973 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:29.973 ------------------------------------------------------------------------------------ 00:08:29.973 0,0 371520/s 1451 MiB/s 0 0 00:08:29.973 ==================================================================================== 00:08:29.973 Total 371520/s 1451 MiB/s 0 0' 00:08:29.973 23:16:21 -- accel/accel.sh@20 -- # IFS=: 00:08:29.973 23:16:21 -- accel/accel.sh@20 -- # read -r var val 00:08:29.973 23:16:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:29.973 23:16:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:29.973 23:16:21 -- accel/accel.sh@12 -- # build_accel_config 00:08:29.973 23:16:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:29.973 23:16:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.973 23:16:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.973 23:16:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:29.973 23:16:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:29.974 23:16:21 -- accel/accel.sh@41 -- # local IFS=, 00:08:29.974 23:16:21 -- accel/accel.sh@42 -- # jq -r . 00:08:29.974 [2024-07-26 23:16:21.507632] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
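The '-x 3' argument on the run_test line above surfaces as 'Source buffers: 3' in the printed configuration. A standalone re-run of the same workload might look like the sketch below; the one assumption is that it is invoked directly, without the JSON accel config the harness pipes in on /dev/fd/62 (empty here, since no hardware modules were configured):

  # 1-second software xor across 3 source buffers, with output verification
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3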
00:08:29.974 [2024-07-26 23:16:21.507740] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60113 ] 00:08:29.974 [2024-07-26 23:16:21.681071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.233 [2024-07-26 23:16:21.929940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val= 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val= 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val=0x1 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val= 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val= 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val=xor 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val=3 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val= 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val=software 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@23 -- # accel_module=software 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val=32 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val=32 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val=1 00:08:30.493 23:16:22 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val=Yes 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val= 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:30.493 23:16:22 -- accel/accel.sh@21 -- # val= 00:08:30.493 23:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # IFS=: 00:08:30.493 23:16:22 -- accel/accel.sh@20 -- # read -r var val 00:08:33.030 23:16:24 -- accel/accel.sh@21 -- # val= 00:08:33.030 23:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.030 23:16:24 -- accel/accel.sh@20 -- # IFS=: 00:08:33.030 23:16:24 -- accel/accel.sh@20 -- # read -r var val 00:08:33.030 23:16:24 -- accel/accel.sh@21 -- # val= 00:08:33.030 23:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.030 23:16:24 -- accel/accel.sh@20 -- # IFS=: 00:08:33.030 23:16:24 -- accel/accel.sh@20 -- # read -r var val 00:08:33.030 23:16:24 -- accel/accel.sh@21 -- # val= 00:08:33.030 23:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.030 23:16:24 -- accel/accel.sh@20 -- # IFS=: 00:08:33.030 23:16:24 -- accel/accel.sh@20 -- # read -r var val 00:08:33.030 23:16:24 -- accel/accel.sh@21 -- # val= 00:08:33.030 23:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.030 23:16:24 -- accel/accel.sh@20 -- # IFS=: 00:08:33.030 23:16:24 -- accel/accel.sh@20 -- # read -r var val 00:08:33.030 23:16:24 -- accel/accel.sh@21 -- # val= 00:08:33.030 23:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.030 23:16:24 -- accel/accel.sh@20 -- # IFS=: 00:08:33.030 23:16:24 -- accel/accel.sh@20 -- # read -r var val 00:08:33.030 23:16:24 -- accel/accel.sh@21 -- # val= 00:08:33.030 23:16:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.030 23:16:24 -- accel/accel.sh@20 -- # IFS=: 00:08:33.030 23:16:24 -- accel/accel.sh@20 -- # read -r var val 00:08:33.030 23:16:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:33.030 23:16:24 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:33.030 23:16:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.030 00:08:33.030 real 0m5.559s 00:08:33.030 user 0m4.847s 00:08:33.030 sys 0m0.503s 00:08:33.030 23:16:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.030 ************************************ 00:08:33.030 END TEST accel_xor 00:08:33.030 ************************************ 00:08:33.030 23:16:24 -- common/autotest_common.sh@10 -- # set +x 00:08:33.030 23:16:24 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:33.030 23:16:24 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:33.030 23:16:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.030 23:16:24 -- common/autotest_common.sh@10 -- # set +x 00:08:33.030 ************************************ 00:08:33.030 START TEST accel_dif_verify 00:08:33.030 ************************************ 
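The dif_verify test starting here drives the same accel_perf binary with '-w dif_verify'; the configuration echoed further down reports a 512-byte block size and 8 bytes of metadata per block on a 4096-byte transfer. A minimal direct invocation, assuming the harness-supplied JSON config can be omitted and those sizes are the tool's defaults:

  # 1-second DIF verify pass on the software path
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify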
00:08:33.030 23:16:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:08:33.030 23:16:24 -- accel/accel.sh@16 -- # local accel_opc 00:08:33.030 23:16:24 -- accel/accel.sh@17 -- # local accel_module 00:08:33.030 23:16:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:08:33.030 23:16:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:33.030 23:16:24 -- accel/accel.sh@12 -- # build_accel_config 00:08:33.030 23:16:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:33.030 23:16:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.030 23:16:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.030 23:16:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:33.030 23:16:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:33.030 23:16:24 -- accel/accel.sh@41 -- # local IFS=, 00:08:33.030 23:16:24 -- accel/accel.sh@42 -- # jq -r . 00:08:33.030 [2024-07-26 23:16:24.350511] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:33.030 [2024-07-26 23:16:24.350618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60160 ] 00:08:33.030 [2024-07-26 23:16:24.520649] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.030 [2024-07-26 23:16:24.777893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.568 23:16:27 -- accel/accel.sh@18 -- # out=' 00:08:35.568 SPDK Configuration: 00:08:35.568 Core mask: 0x1 00:08:35.568 00:08:35.568 Accel Perf Configuration: 00:08:35.568 Workload Type: dif_verify 00:08:35.568 Vector size: 4096 bytes 00:08:35.568 Transfer size: 4096 bytes 00:08:35.568 Block size: 512 bytes 00:08:35.568 Metadata size: 8 bytes 00:08:35.568 Vector count 1 00:08:35.568 Module: software 00:08:35.568 Queue depth: 32 00:08:35.568 Allocate depth: 32 00:08:35.568 # threads/core: 1 00:08:35.568 Run time: 1 seconds 00:08:35.568 Verify: No 00:08:35.568 00:08:35.568 Running for 1 seconds... 00:08:35.568 00:08:35.568 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:35.568 ------------------------------------------------------------------------------------ 00:08:35.568 0,0 128480/s 509 MiB/s 0 0 00:08:35.568 ==================================================================================== 00:08:35.568 Total 128480/s 501 MiB/s 0 0' 00:08:35.568 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:35.568 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:35.568 23:16:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:35.568 23:16:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:35.568 23:16:27 -- accel/accel.sh@12 -- # build_accel_config 00:08:35.568 23:16:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:35.568 23:16:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.568 23:16:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.568 23:16:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:35.568 23:16:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:35.568 23:16:27 -- accel/accel.sh@41 -- # local IFS=, 00:08:35.568 23:16:27 -- accel/accel.sh@42 -- # jq -r . 00:08:35.568 [2024-07-26 23:16:27.060423] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
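On the dif_verify table above: the Total row's 501 MiB/s is exactly what 128480 transfers/s at 4096 bytes works out to; the per-core row shows 509 MiB/s, presumably derived from per-core elapsed time rather than the overall wall time (a guess; the log itself does not say). The arithmetic:

  echo $(( 128480 * 4096 / 1048576 ))   # prints 501, the Total row's MiB/s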
00:08:35.568 [2024-07-26 23:16:27.060668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60191 ] 00:08:35.568 [2024-07-26 23:16:27.229995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.827 [2024-07-26 23:16:27.442591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val= 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val= 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val=0x1 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val= 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val= 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val=dif_verify 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val= 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val=software 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@23 -- # accel_module=software 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 
-- # val=32 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val=32 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val=1 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val=No 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val= 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:36.087 23:16:27 -- accel/accel.sh@21 -- # val= 00:08:36.087 23:16:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # IFS=: 00:08:36.087 23:16:27 -- accel/accel.sh@20 -- # read -r var val 00:08:37.994 23:16:29 -- accel/accel.sh@21 -- # val= 00:08:37.994 23:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.994 23:16:29 -- accel/accel.sh@20 -- # IFS=: 00:08:37.994 23:16:29 -- accel/accel.sh@20 -- # read -r var val 00:08:37.994 23:16:29 -- accel/accel.sh@21 -- # val= 00:08:37.994 23:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.994 23:16:29 -- accel/accel.sh@20 -- # IFS=: 00:08:37.994 23:16:29 -- accel/accel.sh@20 -- # read -r var val 00:08:37.994 23:16:29 -- accel/accel.sh@21 -- # val= 00:08:37.994 23:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.994 23:16:29 -- accel/accel.sh@20 -- # IFS=: 00:08:37.994 23:16:29 -- accel/accel.sh@20 -- # read -r var val 00:08:37.994 23:16:29 -- accel/accel.sh@21 -- # val= 00:08:37.994 23:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.994 23:16:29 -- accel/accel.sh@20 -- # IFS=: 00:08:37.994 23:16:29 -- accel/accel.sh@20 -- # read -r var val 00:08:37.994 23:16:29 -- accel/accel.sh@21 -- # val= 00:08:37.994 23:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.994 23:16:29 -- accel/accel.sh@20 -- # IFS=: 00:08:37.994 23:16:29 -- accel/accel.sh@20 -- # read -r var val 00:08:37.994 23:16:29 -- accel/accel.sh@21 -- # val= 00:08:37.994 23:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.994 23:16:29 -- accel/accel.sh@20 -- # IFS=: 00:08:37.994 23:16:29 -- accel/accel.sh@20 -- # read -r var val 00:08:37.994 23:16:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:37.994 23:16:29 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:08:37.994 ************************************ 00:08:37.994 END TEST accel_dif_verify 00:08:37.994 ************************************ 00:08:37.994 23:16:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:37.994 00:08:37.994 real 0m5.298s 00:08:37.994 user 0m4.668s 00:08:37.994 sys 0m0.427s 00:08:37.994 23:16:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.994 
23:16:29 -- common/autotest_common.sh@10 -- # set +x 00:08:37.994 23:16:29 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:37.994 23:16:29 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:37.994 23:16:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.994 23:16:29 -- common/autotest_common.sh@10 -- # set +x 00:08:37.994 ************************************ 00:08:37.994 START TEST accel_dif_generate 00:08:37.994 ************************************ 00:08:37.994 23:16:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:08:37.994 23:16:29 -- accel/accel.sh@16 -- # local accel_opc 00:08:37.994 23:16:29 -- accel/accel.sh@17 -- # local accel_module 00:08:37.994 23:16:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:08:37.994 23:16:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:37.994 23:16:29 -- accel/accel.sh@12 -- # build_accel_config 00:08:37.994 23:16:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:37.994 23:16:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.995 23:16:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.995 23:16:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:37.995 23:16:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:37.995 23:16:29 -- accel/accel.sh@41 -- # local IFS=, 00:08:37.995 23:16:29 -- accel/accel.sh@42 -- # jq -r . 00:08:37.995 [2024-07-26 23:16:29.727475] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:37.995 [2024-07-26 23:16:29.727753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60238 ] 00:08:38.254 [2024-07-26 23:16:29.900542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.513 [2024-07-26 23:16:30.119020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.051 23:16:32 -- accel/accel.sh@18 -- # out=' 00:08:41.051 SPDK Configuration: 00:08:41.051 Core mask: 0x1 00:08:41.051 00:08:41.051 Accel Perf Configuration: 00:08:41.051 Workload Type: dif_generate 00:08:41.051 Vector size: 4096 bytes 00:08:41.051 Transfer size: 4096 bytes 00:08:41.051 Block size: 512 bytes 00:08:41.051 Metadata size: 8 bytes 00:08:41.051 Vector count 1 00:08:41.051 Module: software 00:08:41.051 Queue depth: 32 00:08:41.051 Allocate depth: 32 00:08:41.051 # threads/core: 1 00:08:41.051 Run time: 1 seconds 00:08:41.051 Verify: No 00:08:41.051 00:08:41.051 Running for 1 seconds... 
00:08:41.051 00:08:41.051 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:41.051 ------------------------------------------------------------------------------------ 00:08:41.051 0,0 153408/s 608 MiB/s 0 0 00:08:41.051 ==================================================================================== 00:08:41.051 Total 153408/s 599 MiB/s 0 0' 00:08:41.051 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.051 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.051 23:16:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:41.051 23:16:32 -- accel/accel.sh@12 -- # build_accel_config 00:08:41.051 23:16:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:41.051 23:16:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:41.051 23:16:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:41.051 23:16:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:41.051 23:16:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:41.051 23:16:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:41.051 23:16:32 -- accel/accel.sh@41 -- # local IFS=, 00:08:41.051 23:16:32 -- accel/accel.sh@42 -- # jq -r . 00:08:41.051 [2024-07-26 23:16:32.339052] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:41.051 [2024-07-26 23:16:32.339171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60269 ] 00:08:41.051 [2024-07-26 23:16:32.510133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.051 [2024-07-26 23:16:32.723364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val= 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val= 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val=0x1 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val= 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val= 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val=dif_generate 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 
00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val= 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val=software 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@23 -- # accel_module=software 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val=32 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val=32 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val=1 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val=No 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val= 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.310 23:16:32 -- accel/accel.sh@21 -- # val= 00:08:41.310 23:16:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.310 23:16:32 -- accel/accel.sh@20 -- # read -r var val 00:08:43.216 23:16:34 -- accel/accel.sh@21 -- # val= 00:08:43.216 23:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.216 23:16:34 -- accel/accel.sh@20 -- # IFS=: 00:08:43.216 23:16:34 -- accel/accel.sh@20 -- # read -r var val 00:08:43.216 23:16:34 -- accel/accel.sh@21 -- # val= 00:08:43.216 23:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.216 23:16:34 -- accel/accel.sh@20 -- # IFS=: 00:08:43.216 23:16:34 -- accel/accel.sh@20 -- # read -r var val 00:08:43.216 23:16:34 -- accel/accel.sh@21 -- # val= 00:08:43.216 23:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.216 23:16:34 -- 
accel/accel.sh@20 -- # IFS=: 00:08:43.216 23:16:34 -- accel/accel.sh@20 -- # read -r var val 00:08:43.216 23:16:34 -- accel/accel.sh@21 -- # val= 00:08:43.216 23:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.216 23:16:34 -- accel/accel.sh@20 -- # IFS=: 00:08:43.216 23:16:34 -- accel/accel.sh@20 -- # read -r var val 00:08:43.216 23:16:34 -- accel/accel.sh@21 -- # val= 00:08:43.216 23:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.216 23:16:34 -- accel/accel.sh@20 -- # IFS=: 00:08:43.216 23:16:34 -- accel/accel.sh@20 -- # read -r var val 00:08:43.216 23:16:34 -- accel/accel.sh@21 -- # val= 00:08:43.216 23:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.216 23:16:34 -- accel/accel.sh@20 -- # IFS=: 00:08:43.216 23:16:34 -- accel/accel.sh@20 -- # read -r var val 00:08:43.216 23:16:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:43.216 23:16:34 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:08:43.217 ************************************ 00:08:43.217 END TEST accel_dif_generate 00:08:43.217 ************************************ 00:08:43.217 23:16:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:43.217 00:08:43.217 real 0m5.201s 00:08:43.217 user 0m4.604s 00:08:43.217 sys 0m0.393s 00:08:43.217 23:16:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.217 23:16:34 -- common/autotest_common.sh@10 -- # set +x 00:08:43.217 23:16:34 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:43.217 23:16:34 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:43.217 23:16:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:43.217 23:16:34 -- common/autotest_common.sh@10 -- # set +x 00:08:43.217 ************************************ 00:08:43.217 START TEST accel_dif_generate_copy 00:08:43.217 ************************************ 00:08:43.217 23:16:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:08:43.217 23:16:34 -- accel/accel.sh@16 -- # local accel_opc 00:08:43.217 23:16:34 -- accel/accel.sh@17 -- # local accel_module 00:08:43.217 23:16:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:08:43.217 23:16:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:43.217 23:16:34 -- accel/accel.sh@12 -- # build_accel_config 00:08:43.217 23:16:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:43.217 23:16:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:43.217 23:16:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:43.217 23:16:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:43.217 23:16:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:43.217 23:16:34 -- accel/accel.sh@41 -- # local IFS=, 00:08:43.217 23:16:34 -- accel/accel.sh@42 -- # jq -r . 00:08:43.476 [2024-07-26 23:16:35.003051] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
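The dif_generate_copy pass launched here is the generate-and-copy variant of the preceding dif_generate workload. Mirroring the captured command line, a direct invocation could look like this (assumption: the /dev/fd/62 JSON config is harness plumbing and can be dropped for a manual run):

  # 1-second dif_generate_copy run on the software module
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy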
00:08:43.476 [2024-07-26 23:16:35.003157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60316 ] 00:08:43.476 [2024-07-26 23:16:35.173294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.735 [2024-07-26 23:16:35.390353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.270 23:16:37 -- accel/accel.sh@18 -- # out=' 00:08:46.270 SPDK Configuration: 00:08:46.270 Core mask: 0x1 00:08:46.270 00:08:46.270 Accel Perf Configuration: 00:08:46.270 Workload Type: dif_generate_copy 00:08:46.270 Vector size: 4096 bytes 00:08:46.270 Transfer size: 4096 bytes 00:08:46.270 Vector count 1 00:08:46.270 Module: software 00:08:46.270 Queue depth: 32 00:08:46.270 Allocate depth: 32 00:08:46.270 # threads/core: 1 00:08:46.270 Run time: 1 seconds 00:08:46.270 Verify: No 00:08:46.270 00:08:46.270 Running for 1 seconds... 00:08:46.270 00:08:46.270 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:46.270 ------------------------------------------------------------------------------------ 00:08:46.270 0,0 119232/s 473 MiB/s 0 0 00:08:46.270 ==================================================================================== 00:08:46.270 Total 119232/s 465 MiB/s 0 0' 00:08:46.270 23:16:37 -- accel/accel.sh@20 -- # IFS=: 00:08:46.270 23:16:37 -- accel/accel.sh@20 -- # read -r var val 00:08:46.270 23:16:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:46.270 23:16:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:46.270 23:16:37 -- accel/accel.sh@12 -- # build_accel_config 00:08:46.270 23:16:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:46.270 23:16:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:46.270 23:16:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:46.270 23:16:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:46.270 23:16:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:46.270 23:16:37 -- accel/accel.sh@41 -- # local IFS=, 00:08:46.270 23:16:37 -- accel/accel.sh@42 -- # jq -r . 00:08:46.270 [2024-07-26 23:16:37.566758] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
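Pulling the single-core software numbers from the tables so far: xor with 2 source buffers 396704/s, xor with 3 buffers 371520/s, dif_verify 128480/s, dif_generate 153408/s, dif_generate_copy 119232/s. At the same 4096-byte transfer size, the DIF workloads run roughly 2.5-3.3x slower than plain xor. The copy table's Total row again matches the transfer rate:

  echo $(( 119232 * 4096 / 1048576 ))   # prints 465 MiB/s, as in the Total row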
00:08:46.270 [2024-07-26 23:16:37.566871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60352 ] 00:08:46.270 [2024-07-26 23:16:37.736384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.270 [2024-07-26 23:16:37.944591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.529 23:16:38 -- accel/accel.sh@21 -- # val= 00:08:46.529 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.529 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.529 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.529 23:16:38 -- accel/accel.sh@21 -- # val= 00:08:46.529 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.529 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.529 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.529 23:16:38 -- accel/accel.sh@21 -- # val=0x1 00:08:46.529 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.529 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 -- # val= 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 -- # val= 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 -- # val= 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 -- # val=software 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@23 -- # accel_module=software 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 -- # val=32 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 -- # val=32 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 
-- # val=1 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 -- # val=No 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 -- # val= 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:46.530 23:16:38 -- accel/accel.sh@21 -- # val= 00:08:46.530 23:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # IFS=: 00:08:46.530 23:16:38 -- accel/accel.sh@20 -- # read -r var val 00:08:48.437 23:16:40 -- accel/accel.sh@21 -- # val= 00:08:48.437 23:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.437 23:16:40 -- accel/accel.sh@20 -- # IFS=: 00:08:48.437 23:16:40 -- accel/accel.sh@20 -- # read -r var val 00:08:48.437 23:16:40 -- accel/accel.sh@21 -- # val= 00:08:48.437 23:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.437 23:16:40 -- accel/accel.sh@20 -- # IFS=: 00:08:48.437 23:16:40 -- accel/accel.sh@20 -- # read -r var val 00:08:48.437 23:16:40 -- accel/accel.sh@21 -- # val= 00:08:48.437 23:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.437 23:16:40 -- accel/accel.sh@20 -- # IFS=: 00:08:48.437 23:16:40 -- accel/accel.sh@20 -- # read -r var val 00:08:48.437 23:16:40 -- accel/accel.sh@21 -- # val= 00:08:48.437 23:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.437 23:16:40 -- accel/accel.sh@20 -- # IFS=: 00:08:48.437 23:16:40 -- accel/accel.sh@20 -- # read -r var val 00:08:48.437 23:16:40 -- accel/accel.sh@21 -- # val= 00:08:48.437 23:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.437 23:16:40 -- accel/accel.sh@20 -- # IFS=: 00:08:48.437 23:16:40 -- accel/accel.sh@20 -- # read -r var val 00:08:48.437 23:16:40 -- accel/accel.sh@21 -- # val= 00:08:48.437 23:16:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.437 23:16:40 -- accel/accel.sh@20 -- # IFS=: 00:08:48.437 23:16:40 -- accel/accel.sh@20 -- # read -r var val 00:08:48.696 23:16:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:48.696 23:16:40 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:08:48.696 23:16:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:48.696 00:08:48.696 real 0m5.247s 00:08:48.696 user 0m4.678s 00:08:48.696 sys 0m0.364s 00:08:48.696 23:16:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.696 ************************************ 00:08:48.696 END TEST accel_dif_generate_copy 00:08:48.696 ************************************ 00:08:48.696 23:16:40 -- common/autotest_common.sh@10 -- # set +x 00:08:48.696 23:16:40 -- accel/accel.sh@107 -- # [[ y == y ]] 00:08:48.696 23:16:40 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:48.696 23:16:40 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:48.696 23:16:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:48.696 23:16:40 -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.696 ************************************ 00:08:48.696 START TEST accel_comp 00:08:48.696 ************************************ 00:08:48.696 23:16:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:48.696 23:16:40 -- accel/accel.sh@16 -- # local accel_opc 00:08:48.696 23:16:40 -- accel/accel.sh@17 -- # local accel_module 00:08:48.696 23:16:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:48.696 23:16:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:48.696 23:16:40 -- accel/accel.sh@12 -- # build_accel_config 00:08:48.696 23:16:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:48.696 23:16:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:48.696 23:16:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:48.696 23:16:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:48.696 23:16:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:48.696 23:16:40 -- accel/accel.sh@41 -- # local IFS=, 00:08:48.696 23:16:40 -- accel/accel.sh@42 -- # jq -r . 00:08:48.696 [2024-07-26 23:16:40.327547] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:48.696 [2024-07-26 23:16:40.327854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60394 ] 00:08:48.954 [2024-07-26 23:16:40.502741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.212 [2024-07-26 23:16:40.767797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.742 23:16:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:51.742 00:08:51.742 SPDK Configuration: 00:08:51.742 Core mask: 0x1 00:08:51.742 00:08:51.742 Accel Perf Configuration: 00:08:51.742 Workload Type: compress 00:08:51.742 Transfer size: 4096 bytes 00:08:51.742 Vector count 1 00:08:51.742 Module: software 00:08:51.742 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:51.742 Queue depth: 32 00:08:51.742 Allocate depth: 32 00:08:51.742 # threads/core: 1 00:08:51.742 Run time: 1 seconds 00:08:51.742 Verify: No 00:08:51.742 00:08:51.742 Running for 1 seconds... 
00:08:51.742 00:08:51.742 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:51.742 ------------------------------------------------------------------------------------ 00:08:51.742 0,0 57856/s 241 MiB/s 0 0 00:08:51.742 ==================================================================================== 00:08:51.742 Total 57856/s 226 MiB/s 0 0' 00:08:51.742 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:51.742 23:16:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:51.742 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:51.742 23:16:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:51.742 23:16:43 -- accel/accel.sh@12 -- # build_accel_config 00:08:51.742 23:16:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:51.742 23:16:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:51.742 23:16:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:51.742 23:16:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:51.742 23:16:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:51.742 23:16:43 -- accel/accel.sh@41 -- # local IFS=, 00:08:51.742 23:16:43 -- accel/accel.sh@42 -- # jq -r . 00:08:51.742 [2024-07-26 23:16:43.105720] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:51.742 [2024-07-26 23:16:43.105831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60431 ] 00:08:51.742 [2024-07-26 23:16:43.277332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.000 [2024-07-26 23:16:43.530121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.258 23:16:43 -- accel/accel.sh@21 -- # val= 00:08:52.258 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.258 23:16:43 -- accel/accel.sh@21 -- # val= 00:08:52.258 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.258 23:16:43 -- accel/accel.sh@21 -- # val= 00:08:52.258 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.258 23:16:43 -- accel/accel.sh@21 -- # val=0x1 00:08:52.258 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.258 23:16:43 -- accel/accel.sh@21 -- # val= 00:08:52.258 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.258 23:16:43 -- accel/accel.sh@21 -- # val= 00:08:52.258 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.258 23:16:43 -- accel/accel.sh@21 -- # val=compress 00:08:52.258 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.258 23:16:43 -- accel/accel.sh@24 -- # accel_opc=compress 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # IFS=: 
00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.258 23:16:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:52.258 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.258 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.259 23:16:43 -- accel/accel.sh@21 -- # val= 00:08:52.259 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.259 23:16:43 -- accel/accel.sh@21 -- # val=software 00:08:52.259 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.259 23:16:43 -- accel/accel.sh@23 -- # accel_module=software 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.259 23:16:43 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:52.259 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.259 23:16:43 -- accel/accel.sh@21 -- # val=32 00:08:52.259 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.259 23:16:43 -- accel/accel.sh@21 -- # val=32 00:08:52.259 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.259 23:16:43 -- accel/accel.sh@21 -- # val=1 00:08:52.259 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.259 23:16:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:52.259 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.259 23:16:43 -- accel/accel.sh@21 -- # val=No 00:08:52.259 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.259 23:16:43 -- accel/accel.sh@21 -- # val= 00:08:52.259 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:52.259 23:16:43 -- accel/accel.sh@21 -- # val= 00:08:52.259 23:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # IFS=: 00:08:52.259 23:16:43 -- accel/accel.sh@20 -- # read -r var val 00:08:54.194 23:16:45 -- accel/accel.sh@21 -- # val= 00:08:54.194 23:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.194 23:16:45 -- accel/accel.sh@20 -- # IFS=: 00:08:54.194 23:16:45 -- accel/accel.sh@20 -- # read -r var val 00:08:54.194 23:16:45 -- accel/accel.sh@21 -- # val= 00:08:54.194 23:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.194 23:16:45 -- accel/accel.sh@20 -- # IFS=: 00:08:54.194 23:16:45 -- accel/accel.sh@20 -- # read -r var val 00:08:54.194 23:16:45 -- accel/accel.sh@21 -- # val= 00:08:54.194 23:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.194 23:16:45 -- accel/accel.sh@20 -- # IFS=: 00:08:54.194 23:16:45 -- accel/accel.sh@20 -- # read -r var val 00:08:54.194 23:16:45 -- accel/accel.sh@21 -- # val= 
00:08:54.194 23:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.194 23:16:45 -- accel/accel.sh@20 -- # IFS=: 00:08:54.194 23:16:45 -- accel/accel.sh@20 -- # read -r var val 00:08:54.194 23:16:45 -- accel/accel.sh@21 -- # val= 00:08:54.194 23:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.194 23:16:45 -- accel/accel.sh@20 -- # IFS=: 00:08:54.194 23:16:45 -- accel/accel.sh@20 -- # read -r var val 00:08:54.194 23:16:45 -- accel/accel.sh@21 -- # val= 00:08:54.194 23:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.194 23:16:45 -- accel/accel.sh@20 -- # IFS=: 00:08:54.194 23:16:45 -- accel/accel.sh@20 -- # read -r var val 00:08:54.194 ************************************ 00:08:54.194 END TEST accel_comp 00:08:54.194 ************************************ 00:08:54.194 23:16:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:54.194 23:16:45 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:08:54.194 23:16:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:54.194 00:08:54.194 real 0m5.568s 00:08:54.194 user 0m4.878s 00:08:54.194 sys 0m0.482s 00:08:54.194 23:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.194 23:16:45 -- common/autotest_common.sh@10 -- # set +x 00:08:54.194 23:16:45 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:54.194 23:16:45 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:54.194 23:16:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:54.194 23:16:45 -- common/autotest_common.sh@10 -- # set +x 00:08:54.194 ************************************ 00:08:54.194 START TEST accel_decomp 00:08:54.194 ************************************ 00:08:54.194 23:16:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:54.194 23:16:45 -- accel/accel.sh@16 -- # local accel_opc 00:08:54.194 23:16:45 -- accel/accel.sh@17 -- # local accel_module 00:08:54.194 23:16:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:54.194 23:16:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:54.194 23:16:45 -- accel/accel.sh@12 -- # build_accel_config 00:08:54.194 23:16:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:54.194 23:16:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:54.194 23:16:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:54.194 23:16:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:54.195 23:16:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:54.195 23:16:45 -- accel/accel.sh@41 -- # local IFS=, 00:08:54.195 23:16:45 -- accel/accel.sh@42 -- # jq -r . 00:08:54.454 [2024-07-26 23:16:45.969588] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:54.454 [2024-07-26 23:16:45.969838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60478 ] 00:08:54.454 [2024-07-26 23:16:46.140711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.713 [2024-07-26 23:16:46.397591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.246 23:16:48 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:57.246 00:08:57.246 SPDK Configuration: 00:08:57.246 Core mask: 0x1 00:08:57.246 00:08:57.246 Accel Perf Configuration: 00:08:57.246 Workload Type: decompress 00:08:57.246 Transfer size: 4096 bytes 00:08:57.246 Vector count 1 00:08:57.246 Module: software 00:08:57.246 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:57.246 Queue depth: 32 00:08:57.246 Allocate depth: 32 00:08:57.246 # threads/core: 1 00:08:57.246 Run time: 1 seconds 00:08:57.246 Verify: Yes 00:08:57.246 00:08:57.246 Running for 1 seconds... 00:08:57.246 00:08:57.246 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:57.246 ------------------------------------------------------------------------------------ 00:08:57.246 0,0 69088/s 127 MiB/s 0 0 00:08:57.247 ==================================================================================== 00:08:57.247 Total 69088/s 269 MiB/s 0 0' 00:08:57.247 23:16:48 -- accel/accel.sh@20 -- # IFS=: 00:08:57.247 23:16:48 -- accel/accel.sh@20 -- # read -r var val 00:08:57.247 23:16:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:57.247 23:16:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:57.247 23:16:48 -- accel/accel.sh@12 -- # build_accel_config 00:08:57.247 23:16:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:57.247 23:16:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:57.247 23:16:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:57.247 23:16:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:57.247 23:16:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:57.247 23:16:48 -- accel/accel.sh@41 -- # local IFS=, 00:08:57.247 23:16:48 -- accel/accel.sh@42 -- # jq -r . 00:08:57.247 [2024-07-26 23:16:48.746837] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:57.247 [2024-07-26 23:16:48.747095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60509 ] 00:08:57.247 [2024-07-26 23:16:48.916393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.505 [2024-07-26 23:16:49.168197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val= 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val= 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val= 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val=0x1 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val= 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val= 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val=decompress 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val= 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val=software 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@23 -- # accel_module=software 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val=32 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- 
accel/accel.sh@21 -- # val=32 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val=1 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val=Yes 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val= 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:08:57.765 23:16:49 -- accel/accel.sh@21 -- # val= 00:08:57.765 23:16:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # IFS=: 00:08:57.765 23:16:49 -- accel/accel.sh@20 -- # read -r var val 00:09:00.299 23:16:51 -- accel/accel.sh@21 -- # val= 00:09:00.299 23:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.299 23:16:51 -- accel/accel.sh@20 -- # IFS=: 00:09:00.299 23:16:51 -- accel/accel.sh@20 -- # read -r var val 00:09:00.299 23:16:51 -- accel/accel.sh@21 -- # val= 00:09:00.299 23:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.299 23:16:51 -- accel/accel.sh@20 -- # IFS=: 00:09:00.299 23:16:51 -- accel/accel.sh@20 -- # read -r var val 00:09:00.299 23:16:51 -- accel/accel.sh@21 -- # val= 00:09:00.299 23:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.299 23:16:51 -- accel/accel.sh@20 -- # IFS=: 00:09:00.299 23:16:51 -- accel/accel.sh@20 -- # read -r var val 00:09:00.299 23:16:51 -- accel/accel.sh@21 -- # val= 00:09:00.299 23:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.299 23:16:51 -- accel/accel.sh@20 -- # IFS=: 00:09:00.299 23:16:51 -- accel/accel.sh@20 -- # read -r var val 00:09:00.299 23:16:51 -- accel/accel.sh@21 -- # val= 00:09:00.299 23:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.299 23:16:51 -- accel/accel.sh@20 -- # IFS=: 00:09:00.299 23:16:51 -- accel/accel.sh@20 -- # read -r var val 00:09:00.299 23:16:51 -- accel/accel.sh@21 -- # val= 00:09:00.299 23:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.299 23:16:51 -- accel/accel.sh@20 -- # IFS=: 00:09:00.299 23:16:51 -- accel/accel.sh@20 -- # read -r var val 00:09:00.299 23:16:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:00.299 23:16:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:00.299 23:16:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:00.299 00:09:00.299 real 0m5.569s 00:09:00.299 user 0m4.880s 00:09:00.299 sys 0m0.483s 00:09:00.299 23:16:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.299 ************************************ 00:09:00.299 END TEST accel_decomp 00:09:00.299 ************************************ 00:09:00.299 23:16:51 -- common/autotest_common.sh@10 -- # set +x 00:09:00.299 23:16:51 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
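The run_test invocation above is the autotest_common.sh helper that brackets every case in this log with the asterisk START/END banners and the real/user/sys timing. Its real implementation (which also toggles xtrace, hence the 'set +x' lines) is not reproduced here; a simplified sketch of the wrapping behavior only:

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"    # emits the real/user/sys lines seen after each case
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }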
00:09:00.299 23:16:51 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:00.299 23:16:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:00.299 23:16:51 -- common/autotest_common.sh@10 -- # set +x 00:09:00.299 ************************************ 00:09:00.299 START TEST accel_decmop_full 00:09:00.299 ************************************ 00:09:00.299 23:16:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:00.299 23:16:51 -- accel/accel.sh@16 -- # local accel_opc 00:09:00.299 23:16:51 -- accel/accel.sh@17 -- # local accel_module 00:09:00.299 23:16:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:00.299 23:16:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:00.299 23:16:51 -- accel/accel.sh@12 -- # build_accel_config 00:09:00.299 23:16:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:00.300 23:16:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:00.300 23:16:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:00.300 23:16:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:00.300 23:16:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:00.300 23:16:51 -- accel/accel.sh@41 -- # local IFS=, 00:09:00.300 23:16:51 -- accel/accel.sh@42 -- # jq -r . 00:09:00.300 [2024-07-26 23:16:51.612444] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:00.300 [2024-07-26 23:16:51.612554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60561 ] 00:09:00.300 [2024-07-26 23:16:51.785894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.300 [2024-07-26 23:16:52.042763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.847 23:16:54 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:02.847 00:09:02.847 SPDK Configuration: 00:09:02.847 Core mask: 0x1 00:09:02.847 00:09:02.847 Accel Perf Configuration: 00:09:02.847 Workload Type: decompress 00:09:02.847 Transfer size: 111250 bytes 00:09:02.847 Vector count 1 00:09:02.847 Module: software 00:09:02.847 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:02.847 Queue depth: 32 00:09:02.847 Allocate depth: 32 00:09:02.847 # threads/core: 1 00:09:02.847 Run time: 1 seconds 00:09:02.847 Verify: Yes 00:09:02.847 00:09:02.847 Running for 1 seconds... 
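Relative to the plain accel_decomp case above, the only new flag here is -o 0, and the config block reflects it: the transfer size is reported as the full 111250-byte chunk instead of the default 4096 bytes. The throughput columns in the table below stay consistent with that size; taking the Total row, 4864 transfers/s x 111250 bytes ~= 541 MB/s, i.e. roughly the 516 MiB/s reported.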
00:09:02.847 00:09:02.847 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:02.847 ------------------------------------------------------------------------------------ 00:09:02.847 0,0 4864/s 200 MiB/s 0 0 00:09:02.847 ==================================================================================== 00:09:02.847 Total 4864/s 516 MiB/s 0 0' 00:09:02.847 23:16:54 -- accel/accel.sh@20 -- # IFS=: 00:09:02.847 23:16:54 -- accel/accel.sh@20 -- # read -r var val 00:09:02.847 23:16:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:02.847 23:16:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:09:02.847 23:16:54 -- accel/accel.sh@12 -- # build_accel_config 00:09:02.847 23:16:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:02.847 23:16:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:02.847 23:16:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.847 23:16:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:02.847 23:16:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:02.847 23:16:54 -- accel/accel.sh@41 -- # local IFS=, 00:09:02.847 23:16:54 -- accel/accel.sh@42 -- # jq -r . 00:09:02.847 [2024-07-26 23:16:54.405737] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:02.847 [2024-07-26 23:16:54.405835] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60594 ] 00:09:02.847 [2024-07-26 23:16:54.572458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.106 [2024-07-26 23:16:54.832198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.366 23:16:55 -- accel/accel.sh@21 -- # val= 00:09:03.366 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.366 23:16:55 -- accel/accel.sh@21 -- # val= 00:09:03.366 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.366 23:16:55 -- accel/accel.sh@21 -- # val= 00:09:03.366 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.366 23:16:55 -- accel/accel.sh@21 -- # val=0x1 00:09:03.366 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.366 23:16:55 -- accel/accel.sh@21 -- # val= 00:09:03.366 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.366 23:16:55 -- accel/accel.sh@21 -- # val= 00:09:03.366 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.366 23:16:55 -- accel/accel.sh@21 -- # val=decompress 00:09:03.366 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.366 23:16:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:03.366 23:16:55 -- accel/accel.sh@20 
-- # IFS=: 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.366 23:16:55 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:03.366 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.366 23:16:55 -- accel/accel.sh@21 -- # val= 00:09:03.366 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.366 23:16:55 -- accel/accel.sh@21 -- # val=software 00:09:03.366 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.366 23:16:55 -- accel/accel.sh@23 -- # accel_module=software 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.366 23:16:55 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:03.366 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.366 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:16:55 -- accel/accel.sh@21 -- # val=32 00:09:03.625 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:16:55 -- accel/accel.sh@21 -- # val=32 00:09:03.625 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:16:55 -- accel/accel.sh@21 -- # val=1 00:09:03.625 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:16:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:03.625 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:16:55 -- accel/accel.sh@21 -- # val=Yes 00:09:03.625 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:16:55 -- accel/accel.sh@21 -- # val= 00:09:03.625 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:03.625 23:16:55 -- accel/accel.sh@21 -- # val= 00:09:03.625 23:16:55 -- accel/accel.sh@22 -- # case "$var" in 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # IFS=: 00:09:03.625 23:16:55 -- accel/accel.sh@20 -- # read -r var val 00:09:05.532 23:16:57 -- accel/accel.sh@21 -- # val= 00:09:05.532 23:16:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.532 23:16:57 -- accel/accel.sh@20 -- # IFS=: 00:09:05.532 23:16:57 -- accel/accel.sh@20 -- # read -r var val 00:09:05.532 23:16:57 -- accel/accel.sh@21 -- # val= 00:09:05.532 23:16:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.532 23:16:57 -- accel/accel.sh@20 -- # IFS=: 00:09:05.532 23:16:57 -- accel/accel.sh@20 -- # read -r var val 00:09:05.532 23:16:57 -- accel/accel.sh@21 -- # val= 00:09:05.532 23:16:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.532 23:16:57 -- accel/accel.sh@20 -- # IFS=: 00:09:05.532 23:16:57 -- accel/accel.sh@20 -- # read -r var val 00:09:05.532 23:16:57 -- accel/accel.sh@21 -- # 
val= 00:09:05.532 23:16:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.532 23:16:57 -- accel/accel.sh@20 -- # IFS=: 00:09:05.532 23:16:57 -- accel/accel.sh@20 -- # read -r var val 00:09:05.532 23:16:57 -- accel/accel.sh@21 -- # val= 00:09:05.532 23:16:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.532 23:16:57 -- accel/accel.sh@20 -- # IFS=: 00:09:05.532 23:16:57 -- accel/accel.sh@20 -- # read -r var val 00:09:05.532 23:16:57 -- accel/accel.sh@21 -- # val= 00:09:05.532 23:16:57 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.532 23:16:57 -- accel/accel.sh@20 -- # IFS=: 00:09:05.532 23:16:57 -- accel/accel.sh@20 -- # read -r var val 00:09:05.532 23:16:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:05.532 23:16:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:05.532 ************************************ 00:09:05.532 END TEST accel_decmop_full 00:09:05.532 ************************************ 00:09:05.532 23:16:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:05.532 00:09:05.532 real 0m5.587s 00:09:05.532 user 0m4.902s 00:09:05.532 sys 0m0.477s 00:09:05.532 23:16:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.532 23:16:57 -- common/autotest_common.sh@10 -- # set +x 00:09:05.532 23:16:57 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:05.532 23:16:57 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:05.532 23:16:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:05.532 23:16:57 -- common/autotest_common.sh@10 -- # set +x 00:09:05.532 ************************************ 00:09:05.532 START TEST accel_decomp_mcore 00:09:05.532 ************************************ 00:09:05.532 23:16:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:05.532 23:16:57 -- accel/accel.sh@16 -- # local accel_opc 00:09:05.532 23:16:57 -- accel/accel.sh@17 -- # local accel_module 00:09:05.532 23:16:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:05.532 23:16:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:05.532 23:16:57 -- accel/accel.sh@12 -- # build_accel_config 00:09:05.532 23:16:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:05.532 23:16:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:05.532 23:16:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:05.532 23:16:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:05.532 23:16:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:05.532 23:16:57 -- accel/accel.sh@41 -- # local IFS=, 00:09:05.532 23:16:57 -- accel/accel.sh@42 -- # jq -r . 00:09:05.532 [2024-07-26 23:16:57.278662] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:05.532 [2024-07-26 23:16:57.279094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60646 ] 00:09:05.792 [2024-07-26 23:16:57.450784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.051 [2024-07-26 23:16:57.717190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.051 [2024-07-26 23:16:57.717542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.051 [2024-07-26 23:16:57.717510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.051 [2024-07-26 23:16:57.717394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.587 23:17:00 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:08.587 00:09:08.587 SPDK Configuration: 00:09:08.587 Core mask: 0xf 00:09:08.587 00:09:08.587 Accel Perf Configuration: 00:09:08.587 Workload Type: decompress 00:09:08.587 Transfer size: 4096 bytes 00:09:08.587 Vector count 1 00:09:08.587 Module: software 00:09:08.587 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:08.587 Queue depth: 32 00:09:08.587 Allocate depth: 32 00:09:08.587 # threads/core: 1 00:09:08.587 Run time: 1 seconds 00:09:08.587 Verify: Yes 00:09:08.587 00:09:08.587 Running for 1 seconds... 00:09:08.587 00:09:08.587 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:08.587 ------------------------------------------------------------------------------------ 00:09:08.587 0,0 49920/s 92 MiB/s 0 0 00:09:08.587 3,0 49376/s 90 MiB/s 0 0 00:09:08.587 2,0 50368/s 92 MiB/s 0 0 00:09:08.587 1,0 49888/s 91 MiB/s 0 0 00:09:08.587 ==================================================================================== 00:09:08.587 Total 199552/s 779 MiB/s 0 0' 00:09:08.587 23:17:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:08.587 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:08.587 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:08.588 23:17:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:08.588 23:17:00 -- accel/accel.sh@12 -- # build_accel_config 00:09:08.588 23:17:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:08.588 23:17:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:08.588 23:17:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:08.588 23:17:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:08.588 23:17:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:08.588 23:17:00 -- accel/accel.sh@41 -- # local IFS=, 00:09:08.588 23:17:00 -- accel/accel.sh@42 -- # jq -r . 00:09:08.588 [2024-07-26 23:17:00.121617] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
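A quick consistency check on the four-core (0xf) table above: the per-core transfer rates sum to the aggregate, 49920 + 49376 + 50368 + 49888 = 199552 transfers/s, and 199552/s x 4096 bytes ~= 779 MiB/s, matching the Total row.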
00:09:08.588 [2024-07-26 23:17:00.121726] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60680 ] 00:09:08.588 [2024-07-26 23:17:00.296106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.847 [2024-07-26 23:17:00.556087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.847 [2024-07-26 23:17:00.556280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.847 [2024-07-26 23:17:00.556461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.847 [2024-07-26 23:17:00.556510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val= 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val= 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val= 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val=0xf 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val= 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val= 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val=decompress 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val= 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val=software 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@23 -- # accel_module=software 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 
00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val=32 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val=32 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val=1 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val=Yes 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val= 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:09.107 23:17:00 -- accel/accel.sh@21 -- # val= 00:09:09.107 23:17:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # IFS=: 00:09:09.107 23:17:00 -- accel/accel.sh@20 -- # read -r var val 00:09:11.644 23:17:02 -- accel/accel.sh@21 -- # val= 00:09:11.644 23:17:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # IFS=: 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # read -r var val 00:09:11.644 23:17:02 -- accel/accel.sh@21 -- # val= 00:09:11.644 23:17:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # IFS=: 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # read -r var val 00:09:11.644 23:17:02 -- accel/accel.sh@21 -- # val= 00:09:11.644 23:17:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # IFS=: 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # read -r var val 00:09:11.644 23:17:02 -- accel/accel.sh@21 -- # val= 00:09:11.644 23:17:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # IFS=: 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # read -r var val 00:09:11.644 23:17:02 -- accel/accel.sh@21 -- # val= 00:09:11.644 23:17:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # IFS=: 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # read -r var val 00:09:11.644 23:17:02 -- accel/accel.sh@21 -- # val= 00:09:11.644 23:17:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # IFS=: 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # read -r var val 00:09:11.644 23:17:02 -- accel/accel.sh@21 -- # val= 00:09:11.644 23:17:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # IFS=: 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # read -r var val 00:09:11.644 23:17:02 -- accel/accel.sh@21 -- # val= 00:09:11.644 23:17:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # IFS=: 00:09:11.644 23:17:02 -- 
accel/accel.sh@20 -- # read -r var val 00:09:11.644 23:17:02 -- accel/accel.sh@21 -- # val= 00:09:11.644 23:17:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # IFS=: 00:09:11.644 23:17:02 -- accel/accel.sh@20 -- # read -r var val 00:09:11.644 23:17:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:11.644 23:17:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:11.644 23:17:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:11.644 00:09:11.644 real 0m5.681s 00:09:11.644 user 0m8.047s 00:09:11.644 sys 0m0.281s 00:09:11.644 ************************************ 00:09:11.644 END TEST accel_decomp_mcore 00:09:11.644 ************************************ 00:09:11.644 23:17:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.644 23:17:02 -- common/autotest_common.sh@10 -- # set +x 00:09:11.644 23:17:02 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:11.644 23:17:02 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:11.644 23:17:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.644 23:17:02 -- common/autotest_common.sh@10 -- # set +x 00:09:11.644 ************************************ 00:09:11.644 START TEST accel_decomp_full_mcore 00:09:11.644 ************************************ 00:09:11.644 23:17:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:11.644 23:17:02 -- accel/accel.sh@16 -- # local accel_opc 00:09:11.644 23:17:02 -- accel/accel.sh@17 -- # local accel_module 00:09:11.644 23:17:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:11.644 23:17:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:11.644 23:17:02 -- accel/accel.sh@12 -- # build_accel_config 00:09:11.644 23:17:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:11.644 23:17:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:11.644 23:17:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:11.644 23:17:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:11.644 23:17:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:11.644 23:17:02 -- accel/accel.sh@41 -- # local IFS=, 00:09:11.644 23:17:02 -- accel/accel.sh@42 -- # jq -r . 00:09:11.644 [2024-07-26 23:17:03.034736] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:11.644 [2024-07-26 23:17:03.034846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60730 ] 00:09:11.644 [2024-07-26 23:17:03.210522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.902 [2024-07-26 23:17:03.485224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.902 [2024-07-26 23:17:03.485423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.902 [2024-07-26 23:17:03.485589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.902 [2024-07-26 23:17:03.485648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.434 23:17:05 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:09:14.434 00:09:14.434 SPDK Configuration: 00:09:14.434 Core mask: 0xf 00:09:14.434 00:09:14.434 Accel Perf Configuration: 00:09:14.434 Workload Type: decompress 00:09:14.435 Transfer size: 111250 bytes 00:09:14.435 Vector count 1 00:09:14.435 Module: software 00:09:14.435 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:14.435 Queue depth: 32 00:09:14.435 Allocate depth: 32 00:09:14.435 # threads/core: 1 00:09:14.435 Run time: 1 seconds 00:09:14.435 Verify: Yes 00:09:14.435 00:09:14.435 Running for 1 seconds... 00:09:14.435 00:09:14.435 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:14.435 ------------------------------------------------------------------------------------ 00:09:14.435 0,0 4704/s 194 MiB/s 0 0 00:09:14.435 3,0 4896/s 202 MiB/s 0 0 00:09:14.435 2,0 5056/s 208 MiB/s 0 0 00:09:14.435 1,0 4896/s 202 MiB/s 0 0 00:09:14.435 ==================================================================================== 00:09:14.435 Total 19552/s 2074 MiB/s 0 0' 00:09:14.435 23:17:05 -- accel/accel.sh@20 -- # IFS=: 00:09:14.435 23:17:05 -- accel/accel.sh@20 -- # read -r var val 00:09:14.435 23:17:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:14.435 23:17:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:14.435 23:17:05 -- accel/accel.sh@12 -- # build_accel_config 00:09:14.435 23:17:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:14.435 23:17:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:14.435 23:17:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:14.435 23:17:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:14.435 23:17:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:14.435 23:17:05 -- accel/accel.sh@41 -- # local IFS=, 00:09:14.435 23:17:05 -- accel/accel.sh@42 -- # jq -r . 00:09:14.435 [2024-07-26 23:17:05.918037] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
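This case combines the two earlier variations: -o 0 for full 111250-byte transfers and -m 0xf for four reactors. A hedged sketch of sweeping core masks by hand with the same flags as the traces above (0x3 is an arbitrary intermediate mask added purely for illustration):

  cd /home/vagrant/spdk_repo/spdk
  for mask in 0x1 0x3 0xf; do
      # -m sets the reactor core mask; -y verifies; -o 0 keeps full-chunk transfers
      ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m "$mask"
  done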
00:09:14.435 [2024-07-26 23:17:05.918148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60770 ] 00:09:14.435 [2024-07-26 23:17:06.092743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.694 [2024-07-26 23:17:06.357673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.694 [2024-07-26 23:17:06.357874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.694 [2024-07-26 23:17:06.358064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.694 [2024-07-26 23:17:06.358099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val= 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val= 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val= 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val=0xf 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val= 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val= 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val=decompress 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val= 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val=software 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@23 -- # accel_module=software 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 
00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val=32 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val=32 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val=1 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val=Yes 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val= 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:14.954 23:17:06 -- accel/accel.sh@21 -- # val= 00:09:14.954 23:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # IFS=: 00:09:14.954 23:17:06 -- accel/accel.sh@20 -- # read -r var val 00:09:17.486 23:17:08 -- accel/accel.sh@21 -- # val= 00:09:17.486 23:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # IFS=: 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # read -r var val 00:09:17.486 23:17:08 -- accel/accel.sh@21 -- # val= 00:09:17.486 23:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # IFS=: 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # read -r var val 00:09:17.486 23:17:08 -- accel/accel.sh@21 -- # val= 00:09:17.486 23:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # IFS=: 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # read -r var val 00:09:17.486 23:17:08 -- accel/accel.sh@21 -- # val= 00:09:17.486 23:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # IFS=: 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # read -r var val 00:09:17.486 23:17:08 -- accel/accel.sh@21 -- # val= 00:09:17.486 23:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # IFS=: 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # read -r var val 00:09:17.486 23:17:08 -- accel/accel.sh@21 -- # val= 00:09:17.486 23:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # IFS=: 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # read -r var val 00:09:17.486 23:17:08 -- accel/accel.sh@21 -- # val= 00:09:17.486 23:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # IFS=: 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # read -r var val 00:09:17.486 23:17:08 -- accel/accel.sh@21 -- # val= 00:09:17.486 23:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # IFS=: 00:09:17.486 23:17:08 -- 
accel/accel.sh@20 -- # read -r var val 00:09:17.486 23:17:08 -- accel/accel.sh@21 -- # val= 00:09:17.486 23:17:08 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # IFS=: 00:09:17.486 23:17:08 -- accel/accel.sh@20 -- # read -r var val 00:09:17.486 23:17:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:17.486 23:17:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:17.487 23:17:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:17.487 00:09:17.487 real 0m5.754s 00:09:17.487 user 0m16.299s 00:09:17.487 sys 0m0.586s 00:09:17.487 23:17:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.487 ************************************ 00:09:17.487 END TEST accel_decomp_full_mcore 00:09:17.487 ************************************ 00:09:17.487 23:17:08 -- common/autotest_common.sh@10 -- # set +x 00:09:17.487 23:17:08 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:17.487 23:17:08 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:17.487 23:17:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:17.487 23:17:08 -- common/autotest_common.sh@10 -- # set +x 00:09:17.487 ************************************ 00:09:17.487 START TEST accel_decomp_mthread 00:09:17.487 ************************************ 00:09:17.487 23:17:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:17.487 23:17:08 -- accel/accel.sh@16 -- # local accel_opc 00:09:17.487 23:17:08 -- accel/accel.sh@17 -- # local accel_module 00:09:17.487 23:17:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:17.487 23:17:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:17.487 23:17:08 -- accel/accel.sh@12 -- # build_accel_config 00:09:17.487 23:17:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:17.487 23:17:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:17.487 23:17:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:17.487 23:17:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:17.487 23:17:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:17.487 23:17:08 -- accel/accel.sh@41 -- # local IFS=, 00:09:17.487 23:17:08 -- accel/accel.sh@42 -- # jq -r . 00:09:17.487 [2024-07-26 23:17:08.860412] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:17.487 [2024-07-26 23:17:08.860539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60824 ] 00:09:17.487 [2024-07-26 23:17:09.036141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.746 [2024-07-26 23:17:09.297166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.280 23:17:11 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:09:20.280 00:09:20.280 SPDK Configuration: 00:09:20.280 Core mask: 0x1 00:09:20.280 00:09:20.280 Accel Perf Configuration: 00:09:20.280 Workload Type: decompress 00:09:20.280 Transfer size: 4096 bytes 00:09:20.280 Vector count 1 00:09:20.280 Module: software 00:09:20.280 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:20.280 Queue depth: 32 00:09:20.280 Allocate depth: 32 00:09:20.280 # threads/core: 2 00:09:20.280 Run time: 1 seconds 00:09:20.280 Verify: Yes 00:09:20.280 00:09:20.280 Running for 1 seconds... 00:09:20.280 00:09:20.280 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:20.280 ------------------------------------------------------------------------------------ 00:09:20.280 0,1 35008/s 64 MiB/s 0 0 00:09:20.280 0,0 34880/s 64 MiB/s 0 0 00:09:20.280 ==================================================================================== 00:09:20.280 Total 69888/s 273 MiB/s 0 0' 00:09:20.280 23:17:11 -- accel/accel.sh@20 -- # IFS=: 00:09:20.280 23:17:11 -- accel/accel.sh@20 -- # read -r var val 00:09:20.280 23:17:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:20.280 23:17:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:20.280 23:17:11 -- accel/accel.sh@12 -- # build_accel_config 00:09:20.280 23:17:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:20.280 23:17:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:20.280 23:17:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:20.280 23:17:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:20.280 23:17:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:20.280 23:17:11 -- accel/accel.sh@41 -- # local IFS=, 00:09:20.280 23:17:11 -- accel/accel.sh@42 -- # jq -r . 00:09:20.280 [2024-07-26 23:17:11.653724] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
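In this -T 2 run the Core,Thread column distinguishes the two worker threads on core 0 (rows 0,1 and 0,0), and the Total row is their sum: 35008 + 34880 = 69888 transfers/s, about 273 MiB/s at 4096 bytes per transfer, essentially matching the single-threaded accel_decomp rate (69088/s) on this host.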
00:09:20.280 [2024-07-26 23:17:11.653842] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60857 ] 00:09:20.280 [2024-07-26 23:17:11.822154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.539 [2024-07-26 23:17:12.077632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.798 23:17:12 -- accel/accel.sh@21 -- # val= 00:09:20.798 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.798 23:17:12 -- accel/accel.sh@21 -- # val= 00:09:20.798 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.798 23:17:12 -- accel/accel.sh@21 -- # val= 00:09:20.798 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.798 23:17:12 -- accel/accel.sh@21 -- # val=0x1 00:09:20.798 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.798 23:17:12 -- accel/accel.sh@21 -- # val= 00:09:20.798 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.798 23:17:12 -- accel/accel.sh@21 -- # val= 00:09:20.798 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.798 23:17:12 -- accel/accel.sh@21 -- # val=decompress 00:09:20.798 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.798 23:17:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.798 23:17:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:20.798 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.798 23:17:12 -- accel/accel.sh@21 -- # val= 00:09:20.798 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.798 23:17:12 -- accel/accel.sh@21 -- # val=software 00:09:20.798 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.798 23:17:12 -- accel/accel.sh@23 -- # accel_module=software 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.798 23:17:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:20.798 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.798 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.799 23:17:12 -- accel/accel.sh@21 -- # val=32 00:09:20.799 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.799 23:17:12 -- 
accel/accel.sh@21 -- # val=32 00:09:20.799 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.799 23:17:12 -- accel/accel.sh@21 -- # val=2 00:09:20.799 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.799 23:17:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:20.799 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.799 23:17:12 -- accel/accel.sh@21 -- # val=Yes 00:09:20.799 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.799 23:17:12 -- accel/accel.sh@21 -- # val= 00:09:20.799 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:20.799 23:17:12 -- accel/accel.sh@21 -- # val= 00:09:20.799 23:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # IFS=: 00:09:20.799 23:17:12 -- accel/accel.sh@20 -- # read -r var val 00:09:22.703 23:17:14 -- accel/accel.sh@21 -- # val= 00:09:22.703 23:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.703 23:17:14 -- accel/accel.sh@20 -- # IFS=: 00:09:22.703 23:17:14 -- accel/accel.sh@20 -- # read -r var val 00:09:22.703 23:17:14 -- accel/accel.sh@21 -- # val= 00:09:22.703 23:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.704 23:17:14 -- accel/accel.sh@20 -- # IFS=: 00:09:22.704 23:17:14 -- accel/accel.sh@20 -- # read -r var val 00:09:22.704 23:17:14 -- accel/accel.sh@21 -- # val= 00:09:22.704 23:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.704 23:17:14 -- accel/accel.sh@20 -- # IFS=: 00:09:22.704 23:17:14 -- accel/accel.sh@20 -- # read -r var val 00:09:22.704 23:17:14 -- accel/accel.sh@21 -- # val= 00:09:22.704 23:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.704 23:17:14 -- accel/accel.sh@20 -- # IFS=: 00:09:22.704 23:17:14 -- accel/accel.sh@20 -- # read -r var val 00:09:22.704 23:17:14 -- accel/accel.sh@21 -- # val= 00:09:22.704 23:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.704 23:17:14 -- accel/accel.sh@20 -- # IFS=: 00:09:22.704 23:17:14 -- accel/accel.sh@20 -- # read -r var val 00:09:22.704 23:17:14 -- accel/accel.sh@21 -- # val= 00:09:22.704 23:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.704 23:17:14 -- accel/accel.sh@20 -- # IFS=: 00:09:22.704 23:17:14 -- accel/accel.sh@20 -- # read -r var val 00:09:22.704 23:17:14 -- accel/accel.sh@21 -- # val= 00:09:22.704 23:17:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.704 23:17:14 -- accel/accel.sh@20 -- # IFS=: 00:09:22.704 23:17:14 -- accel/accel.sh@20 -- # read -r var val 00:09:22.704 23:17:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:22.704 23:17:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:22.704 23:17:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:22.704 00:09:22.704 real 0m5.569s 00:09:22.704 user 0m4.890s 00:09:22.704 sys 0m0.475s 00:09:22.704 23:17:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.704 23:17:14 -- common/autotest_common.sh@10 -- # set +x 00:09:22.704 ************************************ 00:09:22.704 END 
TEST accel_decomp_mthread 00:09:22.704 ************************************ 00:09:22.704 23:17:14 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:22.704 23:17:14 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:22.704 23:17:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:22.704 23:17:14 -- common/autotest_common.sh@10 -- # set +x 00:09:22.704 ************************************ 00:09:22.704 START TEST accel_deomp_full_mthread 00:09:22.704 ************************************ 00:09:22.704 23:17:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:22.704 23:17:14 -- accel/accel.sh@16 -- # local accel_opc 00:09:22.704 23:17:14 -- accel/accel.sh@17 -- # local accel_module 00:09:22.704 23:17:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:22.704 23:17:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:22.704 23:17:14 -- accel/accel.sh@12 -- # build_accel_config 00:09:22.704 23:17:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:22.704 23:17:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:22.704 23:17:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:22.704 23:17:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:22.704 23:17:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:22.704 23:17:14 -- accel/accel.sh@41 -- # local IFS=, 00:09:22.704 23:17:14 -- accel/accel.sh@42 -- # jq -r . 00:09:22.963 [2024-07-26 23:17:14.499914] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:22.963 [2024-07-26 23:17:14.500043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60909 ] 00:09:22.963 [2024-07-26 23:17:14.668689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.233 [2024-07-26 23:17:14.920009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.843 23:17:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:25.843 00:09:25.843 SPDK Configuration: 00:09:25.843 Core mask: 0x1 00:09:25.843 00:09:25.843 Accel Perf Configuration: 00:09:25.843 Workload Type: decompress 00:09:25.843 Transfer size: 111250 bytes 00:09:25.843 Vector count 1 00:09:25.843 Module: software 00:09:25.843 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:25.843 Queue depth: 32 00:09:25.843 Allocate depth: 32 00:09:25.843 # threads/core: 2 00:09:25.843 Run time: 1 seconds 00:09:25.843 Verify: Yes 00:09:25.843 00:09:25.843 Running for 1 seconds... 
00:09:25.843 00:09:25.843 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:25.843 ------------------------------------------------------------------------------------ 00:09:25.843 0,1 2496/s 103 MiB/s 0 0 00:09:25.843 0,0 2432/s 100 MiB/s 0 0 00:09:25.843 ==================================================================================== 00:09:25.843 Total 4928/s 522 MiB/s 0 0' 00:09:25.843 23:17:17 -- accel/accel.sh@20 -- # IFS=: 00:09:25.843 23:17:17 -- accel/accel.sh@20 -- # read -r var val 00:09:25.843 23:17:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:25.843 23:17:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:25.843 23:17:17 -- accel/accel.sh@12 -- # build_accel_config 00:09:25.843 23:17:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:25.843 23:17:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:25.843 23:17:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:25.843 23:17:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:25.843 23:17:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:25.843 23:17:17 -- accel/accel.sh@41 -- # local IFS=, 00:09:25.843 23:17:17 -- accel/accel.sh@42 -- # jq -r . 00:09:25.843 [2024-07-26 23:17:17.298254] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:25.843 [2024-07-26 23:17:17.298370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60935 ] 00:09:25.843 [2024-07-26 23:17:17.466386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.102 [2024-07-26 23:17:17.722165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.361 23:17:17 -- accel/accel.sh@21 -- # val= 00:09:26.361 23:17:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.361 23:17:17 -- accel/accel.sh@20 -- # IFS=: 00:09:26.361 23:17:17 -- accel/accel.sh@20 -- # read -r var val 00:09:26.361 23:17:17 -- accel/accel.sh@21 -- # val= 00:09:26.361 23:17:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.361 23:17:17 -- accel/accel.sh@20 -- # IFS=: 00:09:26.361 23:17:17 -- accel/accel.sh@20 -- # read -r var val 00:09:26.361 23:17:17 -- accel/accel.sh@21 -- # val= 00:09:26.361 23:17:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.361 23:17:17 -- accel/accel.sh@20 -- # IFS=: 00:09:26.361 23:17:17 -- accel/accel.sh@20 -- # read -r var val 00:09:26.361 23:17:18 -- accel/accel.sh@21 -- # val=0x1 00:09:26.361 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.361 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.361 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.361 23:17:18 -- accel/accel.sh@21 -- # val= 00:09:26.361 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.361 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.361 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.361 23:17:18 -- accel/accel.sh@21 -- # val= 00:09:26.361 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.362 23:17:18 -- accel/accel.sh@21 -- # val=decompress 00:09:26.362 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.362 23:17:18 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.362 23:17:18 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:26.362 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.362 23:17:18 -- accel/accel.sh@21 -- # val= 00:09:26.362 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.362 23:17:18 -- accel/accel.sh@21 -- # val=software 00:09:26.362 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.362 23:17:18 -- accel/accel.sh@23 -- # accel_module=software 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.362 23:17:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:26.362 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.362 23:17:18 -- accel/accel.sh@21 -- # val=32 00:09:26.362 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.362 23:17:18 -- accel/accel.sh@21 -- # val=32 00:09:26.362 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.362 23:17:18 -- accel/accel.sh@21 -- # val=2 00:09:26.362 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.362 23:17:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:26.362 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.362 23:17:18 -- accel/accel.sh@21 -- # val=Yes 00:09:26.362 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.362 23:17:18 -- accel/accel.sh@21 -- # val= 00:09:26.362 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:26.362 23:17:18 -- accel/accel.sh@21 -- # val= 00:09:26.362 23:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # IFS=: 00:09:26.362 23:17:18 -- accel/accel.sh@20 -- # read -r var val 00:09:28.902 23:17:20 -- accel/accel.sh@21 -- # val= 00:09:28.902 23:17:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # IFS=: 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # read -r var val 00:09:28.902 23:17:20 -- accel/accel.sh@21 -- # val= 00:09:28.902 23:17:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # IFS=: 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # read -r var val 00:09:28.902 23:17:20 -- accel/accel.sh@21 -- # val= 00:09:28.902 23:17:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # IFS=: 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # 
read -r var val 00:09:28.902 23:17:20 -- accel/accel.sh@21 -- # val= 00:09:28.902 23:17:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # IFS=: 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # read -r var val 00:09:28.902 23:17:20 -- accel/accel.sh@21 -- # val= 00:09:28.902 23:17:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # IFS=: 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # read -r var val 00:09:28.902 23:17:20 -- accel/accel.sh@21 -- # val= 00:09:28.902 23:17:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # IFS=: 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # read -r var val 00:09:28.902 23:17:20 -- accel/accel.sh@21 -- # val= 00:09:28.902 23:17:20 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # IFS=: 00:09:28.902 23:17:20 -- accel/accel.sh@20 -- # read -r var val 00:09:28.902 23:17:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:28.902 23:17:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:28.902 23:17:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:28.902 00:09:28.902 real 0m5.614s 00:09:28.902 user 0m4.939s 00:09:28.902 sys 0m0.470s 00:09:28.902 23:17:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.902 ************************************ 00:09:28.902 END TEST accel_deomp_full_mthread 00:09:28.902 ************************************ 00:09:28.902 23:17:20 -- common/autotest_common.sh@10 -- # set +x 00:09:28.902 23:17:20 -- accel/accel.sh@116 -- # [[ n == y ]] 00:09:28.902 23:17:20 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:28.902 23:17:20 -- accel/accel.sh@129 -- # build_accel_config 00:09:28.902 23:17:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:28.902 23:17:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:28.902 23:17:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:28.902 23:17:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:28.902 23:17:20 -- common/autotest_common.sh@10 -- # set +x 00:09:28.902 23:17:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:28.903 23:17:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:28.903 23:17:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:28.903 23:17:20 -- accel/accel.sh@41 -- # local IFS=, 00:09:28.903 23:17:20 -- accel/accel.sh@42 -- # jq -r . 00:09:28.903 ************************************ 00:09:28.903 START TEST accel_dif_functional_tests 00:09:28.903 ************************************ 00:09:28.903 23:17:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:28.903 [2024-07-26 23:17:20.225729] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
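The DIF functional suite that starts here is a CUnit binary driven the same way as accel_perf; a minimal sketch of invoking it directly, where the empty config again stands in for the /dev/fd/62 JSON (an assumption for illustration):

# Runs the accel_dif CUnit suite; the core mask (-c 0x7) on the EAL line
# that follows is supplied by the harness, not by this binary itself.
cd /home/vagrant/spdk_repo/spdk
./test/accel/dif/dif -c <(echo '{}')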
00:09:28.903 [2024-07-26 23:17:20.225846] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60988 ] 00:09:28.903 [2024-07-26 23:17:20.396821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:29.162 [2024-07-26 23:17:20.657627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.162 [2024-07-26 23:17:20.657782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.162 [2024-07-26 23:17:20.657816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.421 00:09:29.421 00:09:29.421 CUnit - A unit testing framework for C - Version 2.1-3 00:09:29.421 http://cunit.sourceforge.net/ 00:09:29.421 00:09:29.421 00:09:29.421 Suite: accel_dif 00:09:29.421 Test: verify: DIF generated, GUARD check ...passed 00:09:29.421 Test: verify: DIF generated, APPTAG check ...passed 00:09:29.421 Test: verify: DIF generated, REFTAG check ...passed 00:09:29.421 Test: verify: DIF not generated, GUARD check ...[2024-07-26 23:17:21.078828] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:29.421 passed 00:09:29.421 Test: verify: DIF not generated, APPTAG check ...passed 00:09:29.421 Test: verify: DIF not generated, REFTAG check ...passed 00:09:29.421 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:29.421 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:09:29.421 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:29.421 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-26 23:17:21.078920] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:29.421 [2024-07-26 23:17:21.079000] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:29.422 [2024-07-26 23:17:21.079044] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:29.422 [2024-07-26 23:17:21.079086] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:29.422 [2024-07-26 23:17:21.079118] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:29.422 [2024-07-26 23:17:21.079225] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:29.422 passed 00:09:29.422 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:29.422 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:09:29.422 Test: generate copy: DIF generated, GUARD check ...passed 00:09:29.422 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:29.422 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:29.422 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-07-26 23:17:21.079464] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:29.422 passed 00:09:29.422 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:29.422 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:29.422 Test: generate copy: iovecs-len validate ...passed 00:09:29.422 Test: generate copy: buffer alignment validate ...passed 00:09:29.422 00:09:29.422 Run Summary: Type Total Ran Passed Failed Inactive 00:09:29.422 suites 1 1 n/a 0 0 00:09:29.422 tests 20 20 20 0 0 00:09:29.422 
asserts 204 204 204 0 n/a 00:09:29.422 00:09:29.422 Elapsed time = 0.005 seconds 00:09:29.422 [2024-07-26 23:17:21.079976] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:09:30.803 00:09:30.803 real 0m2.258s 00:09:30.803 user 0m4.385s 00:09:30.803 sys 0m0.326s 00:09:30.803 23:17:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.803 ************************************ 00:09:30.803 23:17:22 -- common/autotest_common.sh@10 -- # set +x 00:09:30.803 END TEST accel_dif_functional_tests 00:09:30.803 ************************************ 00:09:30.803 00:09:30.803 real 2m3.520s 00:09:30.803 user 2m12.241s 00:09:30.803 sys 0m12.577s 00:09:30.803 23:17:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.803 23:17:22 -- common/autotest_common.sh@10 -- # set +x 00:09:30.803 ************************************ 00:09:30.803 END TEST accel 00:09:30.803 ************************************ 00:09:30.803 23:17:22 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:30.803 23:17:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:30.803 23:17:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:30.803 23:17:22 -- common/autotest_common.sh@10 -- # set +x 00:09:30.803 ************************************ 00:09:30.803 START TEST accel_rpc 00:09:30.803 ************************************ 00:09:30.803 23:17:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:31.063 * Looking for test storage... 00:09:31.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:31.063 23:17:22 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:31.063 23:17:22 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=61075 00:09:31.063 23:17:22 -- accel/accel_rpc.sh@15 -- # waitforlisten 61075 00:09:31.063 23:17:22 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:31.063 23:17:22 -- common/autotest_common.sh@819 -- # '[' -z 61075 ']' 00:09:31.063 23:17:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.063 23:17:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:31.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.063 23:17:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.063 23:17:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:31.063 23:17:22 -- common/autotest_common.sh@10 -- # set +x 00:09:31.063 [2024-07-26 23:17:22.768693] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
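The accel_rpc suite that starts here exercises opcode assignment over JSON-RPC; expressed as direct rpc.py calls against the default /var/tmp/spdk.sock, the flow traced below is:

# Target must be started with --wait-for-rpc so opcodes can be assigned
# before the accel framework initializes.
./build/bin/spdk_tgt --wait-for-rpc &
./scripts/rpc.py accel_assign_opc -o copy -m software
./scripts/rpc.py framework_start_init
./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect: software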
00:09:31.063 [2024-07-26 23:17:22.768805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61075 ] 00:09:31.322 [2024-07-26 23:17:22.939366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.581 [2024-07-26 23:17:23.190160] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:31.581 [2024-07-26 23:17:23.190383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.840 23:17:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:31.840 23:17:23 -- common/autotest_common.sh@852 -- # return 0 00:09:31.840 23:17:23 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:31.840 23:17:23 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:31.840 23:17:23 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:31.840 23:17:23 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:31.840 23:17:23 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:31.840 23:17:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:31.840 23:17:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:31.840 23:17:23 -- common/autotest_common.sh@10 -- # set +x 00:09:31.840 ************************************ 00:09:31.840 START TEST accel_assign_opcode 00:09:31.840 ************************************ 00:09:31.840 23:17:23 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:09:31.840 23:17:23 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:31.840 23:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:31.840 23:17:23 -- common/autotest_common.sh@10 -- # set +x 00:09:31.840 [2024-07-26 23:17:23.538650] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:31.840 23:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:31.840 23:17:23 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:31.840 23:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:31.840 23:17:23 -- common/autotest_common.sh@10 -- # set +x 00:09:31.840 [2024-07-26 23:17:23.546576] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:31.840 23:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:31.840 23:17:23 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:31.840 23:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:31.840 23:17:23 -- common/autotest_common.sh@10 -- # set +x 00:09:32.777 23:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:32.777 23:17:24 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:32.777 23:17:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:32.777 23:17:24 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:32.777 23:17:24 -- common/autotest_common.sh@10 -- # set +x 00:09:32.777 23:17:24 -- accel/accel_rpc.sh@42 -- # grep software 00:09:32.777 23:17:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:32.777 software 00:09:32.777 00:09:32.777 real 0m0.925s 00:09:32.777 user 0m0.044s 00:09:32.777 sys 0m0.018s 00:09:32.777 23:17:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.777 23:17:24 -- common/autotest_common.sh@10 -- # set +x 00:09:32.777 ************************************ 
00:09:32.777 END TEST accel_assign_opcode 00:09:32.777 ************************************ 00:09:32.777 23:17:24 -- accel/accel_rpc.sh@55 -- # killprocess 61075 00:09:32.777 23:17:24 -- common/autotest_common.sh@926 -- # '[' -z 61075 ']' 00:09:32.777 23:17:24 -- common/autotest_common.sh@930 -- # kill -0 61075 00:09:32.777 23:17:24 -- common/autotest_common.sh@931 -- # uname 00:09:32.777 23:17:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:32.777 23:17:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61075 00:09:33.037 23:17:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:33.037 killing process with pid 61075 00:09:33.037 23:17:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:33.037 23:17:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61075' 00:09:33.037 23:17:24 -- common/autotest_common.sh@945 -- # kill 61075 00:09:33.037 23:17:24 -- common/autotest_common.sh@950 -- # wait 61075 00:09:35.573 00:09:35.573 real 0m4.285s 00:09:35.573 user 0m4.057s 00:09:35.573 sys 0m0.611s 00:09:35.573 23:17:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.573 23:17:26 -- common/autotest_common.sh@10 -- # set +x 00:09:35.573 ************************************ 00:09:35.573 END TEST accel_rpc 00:09:35.573 ************************************ 00:09:35.573 23:17:26 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:35.573 23:17:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:35.573 23:17:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:35.573 23:17:26 -- common/autotest_common.sh@10 -- # set +x 00:09:35.573 ************************************ 00:09:35.573 START TEST app_cmdline 00:09:35.573 ************************************ 00:09:35.573 23:17:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:35.573 * Looking for test storage... 00:09:35.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:35.573 23:17:27 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:35.573 23:17:27 -- app/cmdline.sh@17 -- # spdk_tgt_pid=61190 00:09:35.573 23:17:27 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:35.573 23:17:27 -- app/cmdline.sh@18 -- # waitforlisten 61190 00:09:35.573 23:17:27 -- common/autotest_common.sh@819 -- # '[' -z 61190 ']' 00:09:35.573 23:17:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.573 23:17:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:35.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.573 23:17:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.573 23:17:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:35.573 23:17:27 -- common/autotest_common.sh@10 -- # set +x 00:09:35.573 [2024-07-26 23:17:27.126555] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
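The cmdline suite below starts spdk_tgt with an RPC whitelist, so only the two listed methods succeed and everything else fails with JSON-RPC error -32601, exactly as the request/response pair that follows shows:

./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
./scripts/rpc.py spdk_get_version        # allowed: returns the version object
./scripts/rpc.py env_dpdk_get_mem_stats  # rejected: "Method not found"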
00:09:35.573 [2024-07-26 23:17:27.127020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61190 ] 00:09:35.573 [2024-07-26 23:17:27.296767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.832 [2024-07-26 23:17:27.505504] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:35.832 [2024-07-26 23:17:27.505690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.210 23:17:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:37.210 23:17:28 -- common/autotest_common.sh@852 -- # return 0 00:09:37.210 23:17:28 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:37.210 { 00:09:37.210 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:09:37.210 "fields": { 00:09:37.210 "major": 24, 00:09:37.210 "minor": 1, 00:09:37.210 "patch": 1, 00:09:37.210 "suffix": "-pre", 00:09:37.210 "commit": "dbef7efac" 00:09:37.210 } 00:09:37.210 } 00:09:37.210 23:17:28 -- app/cmdline.sh@22 -- # expected_methods=() 00:09:37.210 23:17:28 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:37.210 23:17:28 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:37.210 23:17:28 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:37.210 23:17:28 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:37.210 23:17:28 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:37.210 23:17:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:37.210 23:17:28 -- common/autotest_common.sh@10 -- # set +x 00:09:37.210 23:17:28 -- app/cmdline.sh@26 -- # sort 00:09:37.210 23:17:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:37.210 23:17:28 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:37.210 23:17:28 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:37.210 23:17:28 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:37.210 23:17:28 -- common/autotest_common.sh@640 -- # local es=0 00:09:37.210 23:17:28 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:37.210 23:17:28 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.210 23:17:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:37.210 23:17:28 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.210 23:17:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:37.210 23:17:28 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.210 23:17:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:37.210 23:17:28 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.210 23:17:28 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:37.210 23:17:28 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:37.210 request: 00:09:37.210 { 00:09:37.210 "method": "env_dpdk_get_mem_stats", 00:09:37.210 "req_id": 1 00:09:37.210 } 00:09:37.210 Got 
JSON-RPC error response 00:09:37.210 response: 00:09:37.210 { 00:09:37.210 "code": -32601, 00:09:37.210 "message": "Method not found" 00:09:37.210 } 00:09:37.210 23:17:28 -- common/autotest_common.sh@643 -- # es=1 00:09:37.210 23:17:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:37.210 23:17:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:37.210 23:17:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:37.210 23:17:28 -- app/cmdline.sh@1 -- # killprocess 61190 00:09:37.210 23:17:28 -- common/autotest_common.sh@926 -- # '[' -z 61190 ']' 00:09:37.210 23:17:28 -- common/autotest_common.sh@930 -- # kill -0 61190 00:09:37.210 23:17:28 -- common/autotest_common.sh@931 -- # uname 00:09:37.210 23:17:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:37.210 23:17:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61190 00:09:37.469 killing process with pid 61190 00:09:37.469 23:17:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:37.469 23:17:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:37.469 23:17:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61190' 00:09:37.469 23:17:28 -- common/autotest_common.sh@945 -- # kill 61190 00:09:37.469 23:17:28 -- common/autotest_common.sh@950 -- # wait 61190 00:09:40.005 00:09:40.005 real 0m4.360s 00:09:40.005 user 0m4.567s 00:09:40.005 sys 0m0.615s 00:09:40.005 23:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.005 ************************************ 00:09:40.005 END TEST app_cmdline 00:09:40.005 ************************************ 00:09:40.005 23:17:31 -- common/autotest_common.sh@10 -- # set +x 00:09:40.005 23:17:31 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:40.005 23:17:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:40.005 23:17:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.005 23:17:31 -- common/autotest_common.sh@10 -- # set +x 00:09:40.005 ************************************ 00:09:40.005 START TEST version 00:09:40.005 ************************************ 00:09:40.006 23:17:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:40.006 * Looking for test storage... 
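Each get_header_version call traced below reduces to a single grep/cut/tr pipeline over include/spdk/version.h; for the major component:

# cut's default delimiter is a tab, which separates the macro name from
# its value in version.h.
grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h \
    | cut -f2 | tr -d '"'    # -> 24 for this tree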
00:09:40.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:40.006 23:17:31 -- app/version.sh@17 -- # get_header_version major 00:09:40.006 23:17:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:40.006 23:17:31 -- app/version.sh@14 -- # cut -f2 00:09:40.006 23:17:31 -- app/version.sh@14 -- # tr -d '"' 00:09:40.006 23:17:31 -- app/version.sh@17 -- # major=24 00:09:40.006 23:17:31 -- app/version.sh@18 -- # get_header_version minor 00:09:40.006 23:17:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:40.006 23:17:31 -- app/version.sh@14 -- # cut -f2 00:09:40.006 23:17:31 -- app/version.sh@14 -- # tr -d '"' 00:09:40.006 23:17:31 -- app/version.sh@18 -- # minor=1 00:09:40.006 23:17:31 -- app/version.sh@19 -- # get_header_version patch 00:09:40.006 23:17:31 -- app/version.sh@14 -- # cut -f2 00:09:40.006 23:17:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:40.006 23:17:31 -- app/version.sh@14 -- # tr -d '"' 00:09:40.006 23:17:31 -- app/version.sh@19 -- # patch=1 00:09:40.006 23:17:31 -- app/version.sh@20 -- # get_header_version suffix 00:09:40.006 23:17:31 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:40.006 23:17:31 -- app/version.sh@14 -- # cut -f2 00:09:40.006 23:17:31 -- app/version.sh@14 -- # tr -d '"' 00:09:40.006 23:17:31 -- app/version.sh@20 -- # suffix=-pre 00:09:40.006 23:17:31 -- app/version.sh@22 -- # version=24.1 00:09:40.006 23:17:31 -- app/version.sh@25 -- # (( patch != 0 )) 00:09:40.006 23:17:31 -- app/version.sh@25 -- # version=24.1.1 00:09:40.006 23:17:31 -- app/version.sh@28 -- # version=24.1.1rc0 00:09:40.006 23:17:31 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:40.006 23:17:31 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:40.006 23:17:31 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:09:40.006 23:17:31 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:09:40.006 00:09:40.006 real 0m0.229s 00:09:40.006 user 0m0.121s 00:09:40.006 sys 0m0.157s 00:09:40.006 ************************************ 00:09:40.006 END TEST version 00:09:40.006 ************************************ 00:09:40.006 23:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.006 23:17:31 -- common/autotest_common.sh@10 -- # set +x 00:09:40.006 23:17:31 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:09:40.006 23:17:31 -- spdk/autotest.sh@204 -- # uname -s 00:09:40.006 23:17:31 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:09:40.006 23:17:31 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:09:40.006 23:17:31 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:09:40.006 23:17:31 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:09:40.006 23:17:31 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:40.006 23:17:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:40.006 23:17:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.006 23:17:31 -- common/autotest_common.sh@10 -- # set +x 00:09:40.006 
************************************ 00:09:40.006 START TEST blockdev_nvme 00:09:40.006 ************************************ 00:09:40.006 23:17:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:40.006 * Looking for test storage... 00:09:40.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:40.266 23:17:31 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:40.266 23:17:31 -- bdev/nbd_common.sh@6 -- # set -e 00:09:40.266 23:17:31 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:40.266 23:17:31 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:40.266 23:17:31 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:40.266 23:17:31 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:40.266 23:17:31 -- bdev/blockdev.sh@18 -- # : 00:09:40.266 23:17:31 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:09:40.266 23:17:31 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:09:40.266 23:17:31 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:09:40.266 23:17:31 -- bdev/blockdev.sh@672 -- # uname -s 00:09:40.266 23:17:31 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:09:40.266 23:17:31 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:09:40.266 23:17:31 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:09:40.266 23:17:31 -- bdev/blockdev.sh@681 -- # crypto_device= 00:09:40.266 23:17:31 -- bdev/blockdev.sh@682 -- # dek= 00:09:40.266 23:17:31 -- bdev/blockdev.sh@683 -- # env_ctx= 00:09:40.266 23:17:31 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:09:40.266 23:17:31 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:09:40.266 23:17:31 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:09:40.266 23:17:31 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:09:40.266 23:17:31 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:09:40.266 23:17:31 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61362 00:09:40.266 23:17:31 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:40.266 23:17:31 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:40.266 23:17:31 -- bdev/blockdev.sh@47 -- # waitforlisten 61362 00:09:40.266 23:17:31 -- common/autotest_common.sh@819 -- # '[' -z 61362 ']' 00:09:40.266 23:17:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.266 23:17:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:40.266 23:17:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.266 23:17:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:40.266 23:17:31 -- common/autotest_common.sh@10 -- # set +x 00:09:40.266 [2024-07-26 23:17:31.890047] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
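The bdev configuration loaded below is produced by scripts/gen_nvme.sh, which emits one bdev_nvme_attach_controller entry per local PCIe controller; a single-controller sketch of the same load_subsystem_config call:

# Trimmed to one controller for readability; the trace below attaches four
# (traddr 0000:00:06.0 through 0000:00:09.0).
./scripts/rpc.py load_subsystem_config -j '{ "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller",
    "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } }
] }'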
00:09:40.266 [2024-07-26 23:17:31.890710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61362 ] 00:09:40.525 [2024-07-26 23:17:32.061237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.525 [2024-07-26 23:17:32.274045] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:40.525 [2024-07-26 23:17:32.274481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.902 23:17:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:41.902 23:17:33 -- common/autotest_common.sh@852 -- # return 0 00:09:41.902 23:17:33 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:09:41.902 23:17:33 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:09:41.902 23:17:33 -- bdev/blockdev.sh@79 -- # local json 00:09:41.902 23:17:33 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:09:41.902 23:17:33 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:41.902 23:17:33 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:09:41.903 23:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:41.903 23:17:33 -- common/autotest_common.sh@10 -- # set +x 00:09:42.160 23:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:42.160 23:17:33 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:09:42.160 23:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:42.160 23:17:33 -- common/autotest_common.sh@10 -- # set +x 00:09:42.160 23:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:42.160 23:17:33 -- bdev/blockdev.sh@738 -- # cat 00:09:42.160 23:17:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:09:42.160 23:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:42.160 23:17:33 -- common/autotest_common.sh@10 -- # set +x 00:09:42.160 23:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:42.160 23:17:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:09:42.160 23:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:42.160 23:17:33 -- common/autotest_common.sh@10 -- # set +x 00:09:42.160 23:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:42.160 23:17:33 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:42.160 23:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:42.160 23:17:33 -- common/autotest_common.sh@10 -- # set +x 00:09:42.160 23:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:42.160 23:17:33 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:09:42.160 23:17:33 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:09:42.160 23:17:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:42.160 23:17:33 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:09:42.160 23:17:33 -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.160 23:17:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:42.418 23:17:33 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:09:42.419 23:17:33 -- bdev/blockdev.sh@747 -- # jq -r .name 00:09:42.419 23:17:33 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7701424c-059e-41e6-b7b9-ad3903e2672c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7701424c-059e-41e6-b7b9-ad3903e2672c",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "57473aaf-9b5b-41f5-b4bf-3a7f306727d3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "57473aaf-9b5b-41f5-b4bf-3a7f306727d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b006e202-7faf-4719-b563-e7c097ac5e24"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b006e202-7faf-4719-b563-e7c097ac5e24",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ce15a5c7-1568-4a5d-bd13-0cf8edf33f6e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ce15a5c7-1568-4a5d-bd13-0cf8edf33f6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c6daa1e6-024a-47e4-8b5c-d5099b1f04ee"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c6daa1e6-024a-47e4-8b5c-d5099b1f04ee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b02a0156-8095-457b-b894-17ccf4f67d1e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"b02a0156-8095-457b-b894-17ccf4f67d1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:42.419 23:17:33 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:09:42.419 23:17:33 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:09:42.419 23:17:33 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:09:42.419 23:17:33 -- bdev/blockdev.sh@752 -- # killprocess 61362 00:09:42.419 23:17:33 -- common/autotest_common.sh@926 -- # '[' -z 61362 ']' 00:09:42.419 23:17:33 -- common/autotest_common.sh@930 -- # kill -0 61362 00:09:42.419 23:17:33 -- common/autotest_common.sh@931 -- # uname 00:09:42.419 23:17:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:42.419 23:17:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61362 00:09:42.419 killing process with pid 61362 00:09:42.419 23:17:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:42.419 23:17:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:42.419 23:17:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61362' 00:09:42.419 23:17:33 -- common/autotest_common.sh@945 -- # kill 61362 00:09:42.419 23:17:33 -- common/autotest_common.sh@950 -- # wait 61362 00:09:44.954 23:17:36 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:44.954 23:17:36 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:44.954 23:17:36 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:44.954 23:17:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:44.954 23:17:36 -- common/autotest_common.sh@10 -- # set +x 00:09:44.954 ************************************ 00:09:44.954 START TEST bdev_hello_world 00:09:44.954 ************************************ 00:09:44.954 23:17:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:44.954 [2024-07-26 23:17:36.560880] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:44.954 [2024-07-26 23:17:36.561006] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61465 ] 00:09:45.214 [2024-07-26 23:17:36.732934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.473 [2024-07-26 23:17:36.996359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.042 [2024-07-26 23:17:37.719409] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:46.042 [2024-07-26 23:17:37.719462] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:46.042 [2024-07-26 23:17:37.719487] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:46.042 [2024-07-26 23:17:37.722757] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:46.042 [2024-07-26 23:17:37.723246] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:46.042 [2024-07-26 23:17:37.723275] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:46.042 [2024-07-26 23:17:37.723721] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:46.042 00:09:46.042 [2024-07-26 23:17:37.723752] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:47.421 00:09:47.421 real 0m2.452s 00:09:47.422 user 0m2.014s 00:09:47.422 sys 0m0.330s 00:09:47.422 ************************************ 00:09:47.422 END TEST bdev_hello_world 00:09:47.422 ************************************ 00:09:47.422 23:17:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.422 23:17:38 -- common/autotest_common.sh@10 -- # set +x 00:09:47.422 23:17:38 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:09:47.422 23:17:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:47.422 23:17:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:47.422 23:17:38 -- common/autotest_common.sh@10 -- # set +x 00:09:47.422 ************************************ 00:09:47.422 START TEST bdev_bounds 00:09:47.422 ************************************ 00:09:47.422 23:17:39 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:09:47.422 Process bdevio pid: 61510 00:09:47.422 23:17:39 -- bdev/blockdev.sh@288 -- # bdevio_pid=61510 00:09:47.422 23:17:39 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:47.422 23:17:39 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:47.422 23:17:39 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61510' 00:09:47.422 23:17:39 -- bdev/blockdev.sh@291 -- # waitforlisten 61510 00:09:47.422 23:17:39 -- common/autotest_common.sh@819 -- # '[' -z 61510 ']' 00:09:47.422 23:17:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.422 23:17:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:47.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.422 23:17:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:47.422 23:17:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:47.422 23:17:39 -- common/autotest_common.sh@10 -- # set +x 00:09:47.422 [2024-07-26 23:17:39.099817] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:47.422 [2024-07-26 23:17:39.099928] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61510 ] 00:09:47.681 [2024-07-26 23:17:39.273578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.941 [2024-07-26 23:17:39.538588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.941 [2024-07-26 23:17:39.538788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.941 [2024-07-26 23:17:39.538792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.877 23:17:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:48.877 23:17:40 -- common/autotest_common.sh@852 -- # return 0 00:09:48.877 23:17:40 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:49.137 I/O targets: 00:09:49.137 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:49.137 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:49.137 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:49.137 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:49.137 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:49.137 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:49.137 00:09:49.137 00:09:49.137 CUnit - A unit testing framework for C - Version 2.1-3 00:09:49.137 http://cunit.sourceforge.net/ 00:09:49.137 00:09:49.137 00:09:49.137 Suite: bdevio tests on: Nvme3n1 00:09:49.137 Test: blockdev write read block ...passed 00:09:49.137 Test: blockdev write zeroes read block ...passed 00:09:49.137 Test: blockdev write zeroes read no split ...passed 00:09:49.137 Test: blockdev write zeroes read split ...passed 00:09:49.137 Test: blockdev write zeroes read split partial ...passed 00:09:49.137 Test: blockdev reset ...[2024-07-26 23:17:40.720806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:49.137 [2024-07-26 23:17:40.725220] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:49.137 passed 00:09:49.137 Test: blockdev write read 8 blocks ...passed 00:09:49.137 Test: blockdev write read size > 128k ...passed 00:09:49.137 Test: blockdev write read invalid size ...passed 00:09:49.137 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:49.137 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:49.137 Test: blockdev write read max offset ...passed 00:09:49.137 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:49.137 Test: blockdev writev readv 8 blocks ...passed 00:09:49.137 Test: blockdev writev readv 30 x 1block ...passed 00:09:49.137 Test: blockdev writev readv block ...passed 00:09:49.137 Test: blockdev writev readv size > 128k ...passed 00:09:49.137 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:49.137 Test: blockdev comparev and writev ...[2024-07-26 23:17:40.737455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:09:49.137 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x27800e000 len:0x1000 00:09:49.137 [2024-07-26 23:17:40.737670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:49.137 passed 00:09:49.137 Test: blockdev nvme passthru vendor specific ...passed 00:09:49.137 Test: blockdev nvme admin passthru ...[2024-07-26 23:17:40.738665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:49.137 [2024-07-26 23:17:40.738706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:49.137 passed 00:09:49.137 Test: blockdev copy ...passed 00:09:49.137 Suite: bdevio tests on: Nvme2n3 00:09:49.137 Test: blockdev write read block ...passed 00:09:49.137 Test: blockdev write zeroes read block ...passed 00:09:49.137 Test: blockdev write zeroes read no split ...passed 00:09:49.137 Test: blockdev write zeroes read split ...passed 00:09:49.137 Test: blockdev write zeroes read split partial ...passed 00:09:49.137 Test: blockdev reset ...[2024-07-26 23:17:40.816938] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:49.137 [2024-07-26 23:17:40.821327] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:49.137 passed 00:09:49.137 Test: blockdev write read 8 blocks ...passed 00:09:49.137 Test: blockdev write read size > 128k ...passed 00:09:49.137 Test: blockdev write read invalid size ...passed 00:09:49.137 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:49.137 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:49.137 Test: blockdev write read max offset ...passed 00:09:49.137 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:49.137 Test: blockdev writev readv 8 blocks ...passed 00:09:49.137 Test: blockdev writev readv 30 x 1block ...passed 00:09:49.137 Test: blockdev writev readv block ...passed 00:09:49.137 Test: blockdev writev readv size > 128k ...passed 00:09:49.137 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:49.137 Test: blockdev comparev and writev ...[2024-07-26 23:17:40.832034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27800a000 len:0x1000 00:09:49.137 [2024-07-26 23:17:40.832233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:49.137 passed 00:09:49.137 Test: blockdev nvme passthru rw ...passed 00:09:49.137 Test: blockdev nvme passthru vendor specific ...[2024-07-26 23:17:40.833718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:49.137 [2024-07-26 23:17:40.833821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:09:49.137 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:09:49.137 passed 00:09:49.137 Test: blockdev copy ...passed 00:09:49.137 Suite: bdevio tests on: Nvme2n2 00:09:49.137 Test: blockdev write read block ...passed 00:09:49.137 Test: blockdev write zeroes read block ...passed 00:09:49.137 Test: blockdev write zeroes read no split ...passed 00:09:49.137 Test: blockdev write zeroes read split ...passed 00:09:49.397 Test: blockdev write zeroes read split partial ...passed 00:09:49.397 Test: blockdev reset ...[2024-07-26 23:17:40.909564] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:49.397 [2024-07-26 23:17:40.913750] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:49.397 passed 00:09:49.397 Test: blockdev write read 8 blocks ...passed 00:09:49.397 Test: blockdev write read size > 128k ...passed 00:09:49.397 Test: blockdev write read invalid size ...passed 00:09:49.397 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:49.397 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:49.397 Test: blockdev write read max offset ...passed 00:09:49.397 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:49.397 Test: blockdev writev readv 8 blocks ...passed 00:09:49.397 Test: blockdev writev readv 30 x 1block ...passed 00:09:49.397 Test: blockdev writev readv block ...passed 00:09:49.397 Test: blockdev writev readv size > 128k ...passed 00:09:49.397 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:49.397 Test: blockdev comparev and writev ...[2024-07-26 23:17:40.924857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26c606000 len:0x1000 00:09:49.397 [2024-07-26 23:17:40.925053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:49.397 passed 00:09:49.397 Test: blockdev nvme passthru rw ...passed 00:09:49.397 Test: blockdev nvme passthru vendor specific ...[2024-07-26 23:17:40.926424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:49.397 passed[2024-07-26 23:17:40.926588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:49.397 00:09:49.397 Test: blockdev nvme admin passthru ...passed 00:09:49.397 Test: blockdev copy ...passed 00:09:49.397 Suite: bdevio tests on: Nvme2n1 00:09:49.397 Test: blockdev write read block ...passed 00:09:49.397 Test: blockdev write zeroes read block ...passed 00:09:49.397 Test: blockdev write zeroes read no split ...passed 00:09:49.397 Test: blockdev write zeroes read split ...passed 00:09:49.397 Test: blockdev write zeroes read split partial ...passed 00:09:49.397 Test: blockdev reset ...[2024-07-26 23:17:41.002552] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:49.397 [2024-07-26 23:17:41.006629] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:49.397 passed 00:09:49.397 Test: blockdev write read 8 blocks ...passed 00:09:49.397 Test: blockdev write read size > 128k ...passed 00:09:49.397 Test: blockdev write read invalid size ...passed 00:09:49.397 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:49.397 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:49.397 Test: blockdev write read max offset ...passed 00:09:49.397 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:49.397 Test: blockdev writev readv 8 blocks ...passed 00:09:49.397 Test: blockdev writev readv 30 x 1block ...passed 00:09:49.397 Test: blockdev writev readv block ...passed 00:09:49.397 Test: blockdev writev readv size > 128k ...passed 00:09:49.397 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:49.397 Test: blockdev comparev and writev ...[2024-07-26 23:17:41.017880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26c601000 len:0x1000 00:09:49.397 [2024-07-26 23:17:41.018077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:49.397 passed 00:09:49.397 Test: blockdev nvme passthru rw ...passed 00:09:49.397 Test: blockdev nvme passthru vendor specific ...[2024-07-26 23:17:41.019519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:49.397 [2024-07-26 23:17:41.019697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:49.397 passed 00:09:49.397 Test: blockdev nvme admin passthru ...passed 00:09:49.397 Test: blockdev copy ...passed 00:09:49.397 Suite: bdevio tests on: Nvme1n1 00:09:49.397 Test: blockdev write read block ...passed 00:09:49.397 Test: blockdev write zeroes read block ...passed 00:09:49.397 Test: blockdev write zeroes read no split ...passed 00:09:49.397 Test: blockdev write zeroes read split ...passed 00:09:49.397 Test: blockdev write zeroes read split partial ...passed 00:09:49.397 Test: blockdev reset ...[2024-07-26 23:17:41.097923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:49.397 [2024-07-26 23:17:41.101778] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:49.397 passed 00:09:49.397 Test: blockdev write read 8 blocks ...passed 00:09:49.397 Test: blockdev write read size > 128k ...passed 00:09:49.397 Test: blockdev write read invalid size ...passed 00:09:49.397 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:49.397 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:49.397 Test: blockdev write read max offset ...passed 00:09:49.397 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:49.397 Test: blockdev writev readv 8 blocks ...passed 00:09:49.398 Test: blockdev writev readv 30 x 1block ...passed 00:09:49.398 Test: blockdev writev readv block ...passed 00:09:49.398 Test: blockdev writev readv size > 128k ...passed 00:09:49.398 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:49.398 Test: blockdev comparev and writev ...[2024-07-26 23:17:41.111822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27c206000 len:0x1000 00:09:49.398 [2024-07-26 23:17:41.111877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:49.398 passed 00:09:49.398 Test: blockdev nvme passthru rw ...passed 00:09:49.398 Test: blockdev nvme passthru vendor specific ...passed 00:09:49.398 Test: blockdev nvme admin passthru ...[2024-07-26 23:17:41.112934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:49.398 [2024-07-26 23:17:41.112986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:49.398 passed 00:09:49.398 Test: blockdev copy ...passed 00:09:49.398 Suite: bdevio tests on: Nvme0n1 00:09:49.398 Test: blockdev write read block ...passed 00:09:49.398 Test: blockdev write zeroes read block ...passed 00:09:49.398 Test: blockdev write zeroes read no split ...passed 00:09:49.657 Test: blockdev write zeroes read split ...passed 00:09:49.657 Test: blockdev write zeroes read split partial ...passed 00:09:49.657 Test: blockdev reset ...[2024-07-26 23:17:41.193676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:49.657 [2024-07-26 23:17:41.197601] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:49.657 passed 00:09:49.657 Test: blockdev write read 8 blocks ...passed 00:09:49.657 Test: blockdev write read size > 128k ...passed 00:09:49.657 Test: blockdev write read invalid size ...passed 00:09:49.657 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:49.657 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:49.657 Test: blockdev write read max offset ...passed 00:09:49.657 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:49.657 Test: blockdev writev readv 8 blocks ...passed 00:09:49.657 Test: blockdev writev readv 30 x 1block ...passed 00:09:49.657 Test: blockdev writev readv block ...passed 00:09:49.657 Test: blockdev writev readv size > 128k ...passed 00:09:49.657 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:49.657 Test: blockdev comparev and writev ...[2024-07-26 23:17:41.207479] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:49.657 separate metadata which is not supported yet. 
00:09:49.657 passed 00:09:49.657 Test: blockdev nvme passthru rw ...passed 00:09:49.657 Test: blockdev nvme passthru vendor specific ...[2024-07-26 23:17:41.208578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:49.657 [2024-07-26 23:17:41.208703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0passed 00:09:49.657 Test: blockdev nvme admin passthru ... sqhd:0017 p:1 m:0 dnr:1 00:09:49.657 passed 00:09:49.657 Test: blockdev copy ...passed 00:09:49.657 00:09:49.657 Run Summary: Type Total Ran Passed Failed Inactive 00:09:49.657 suites 6 6 n/a 0 0 00:09:49.657 tests 138 138 138 0 0 00:09:49.657 asserts 893 893 893 0 n/a 00:09:49.657 00:09:49.657 Elapsed time = 1.538 seconds 00:09:49.657 0 00:09:49.657 23:17:41 -- bdev/blockdev.sh@293 -- # killprocess 61510 00:09:49.657 23:17:41 -- common/autotest_common.sh@926 -- # '[' -z 61510 ']' 00:09:49.657 23:17:41 -- common/autotest_common.sh@930 -- # kill -0 61510 00:09:49.657 23:17:41 -- common/autotest_common.sh@931 -- # uname 00:09:49.657 23:17:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:49.657 23:17:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61510 00:09:49.657 killing process with pid 61510 00:09:49.657 23:17:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:49.657 23:17:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:49.657 23:17:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61510' 00:09:49.657 23:17:41 -- common/autotest_common.sh@945 -- # kill 61510 00:09:49.657 23:17:41 -- common/autotest_common.sh@950 -- # wait 61510 00:09:51.096 23:17:42 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:51.096 00:09:51.096 real 0m3.407s 00:09:51.096 user 0m8.346s 00:09:51.096 sys 0m0.544s 00:09:51.096 23:17:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.096 ************************************ 00:09:51.096 END TEST bdev_bounds 00:09:51.096 ************************************ 00:09:51.096 23:17:42 -- common/autotest_common.sh@10 -- # set +x 00:09:51.096 23:17:42 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:51.096 23:17:42 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:51.096 23:17:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:51.096 23:17:42 -- common/autotest_common.sh@10 -- # set +x 00:09:51.096 ************************************ 00:09:51.096 START TEST bdev_nbd 00:09:51.096 ************************************ 00:09:51.096 23:17:42 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:51.096 23:17:42 -- bdev/blockdev.sh@298 -- # uname -s 00:09:51.096 23:17:42 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:51.096 23:17:42 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.096 23:17:42 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:51.096 23:17:42 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:51.096 23:17:42 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:51.096 23:17:42 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:09:51.096 23:17:42 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd 
]] 00:09:51.097 23:17:42 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:51.097 23:17:42 -- bdev/blockdev.sh@309 -- # local nbd_all 00:09:51.097 23:17:42 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:09:51.097 23:17:42 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:51.097 23:17:42 -- bdev/blockdev.sh@312 -- # local nbd_list 00:09:51.097 23:17:42 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:51.097 23:17:42 -- bdev/blockdev.sh@313 -- # local bdev_list 00:09:51.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:51.097 23:17:42 -- bdev/blockdev.sh@316 -- # nbd_pid=61588 00:09:51.097 23:17:42 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:51.097 23:17:42 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:51.097 23:17:42 -- bdev/blockdev.sh@318 -- # waitforlisten 61588 /var/tmp/spdk-nbd.sock 00:09:51.097 23:17:42 -- common/autotest_common.sh@819 -- # '[' -z 61588 ']' 00:09:51.097 23:17:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:51.097 23:17:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:51.097 23:17:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:51.097 23:17:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:51.097 23:17:42 -- common/autotest_common.sh@10 -- # set +x 00:09:51.097 [2024-07-26 23:17:42.589753] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
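The nbd test setup traced above needs the kernel nbd driver plus a bdev_svc instance on its own RPC socket; a sketch with the paths from this log (the modprobe fallback is an assumption, the log only records the [[ -e /sys/module/nbd ]] check):
# Make sure the kernel nbd driver is available (assumed fallback).
[[ -e /sys/module/nbd ]] || sudo modprobe nbd
# Start bdev_svc on the dedicated nbd RPC socket, as in the trace above.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-nbd.sock -i 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
nbd_pid=$!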
00:09:51.097 [2024-07-26 23:17:42.590030] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.097 [2024-07-26 23:17:42.759329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.356 [2024-07-26 23:17:43.018471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.303 23:17:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:52.303 23:17:44 -- common/autotest_common.sh@852 -- # return 0 00:09:52.303 23:17:44 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:52.303 23:17:44 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.303 23:17:44 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:52.303 23:17:44 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:52.304 23:17:44 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:52.304 23:17:44 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.304 23:17:44 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:52.304 23:17:44 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:52.304 23:17:44 -- bdev/nbd_common.sh@24 -- # local i 00:09:52.304 23:17:44 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:52.304 23:17:44 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:52.304 23:17:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:52.304 23:17:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:52.563 23:17:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:52.563 23:17:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:52.563 23:17:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:52.563 23:17:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:52.563 23:17:44 -- common/autotest_common.sh@857 -- # local i 00:09:52.563 23:17:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:52.563 23:17:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:52.563 23:17:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:52.563 23:17:44 -- common/autotest_common.sh@861 -- # break 00:09:52.563 23:17:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:52.563 23:17:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:52.563 23:17:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.563 1+0 records in 00:09:52.563 1+0 records out 00:09:52.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000746544 s, 5.5 MB/s 00:09:52.563 23:17:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.563 23:17:44 -- common/autotest_common.sh@874 -- # size=4096 00:09:52.563 23:17:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.563 23:17:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:52.563 23:17:44 -- common/autotest_common.sh@877 -- # return 0 00:09:52.563 23:17:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:52.563 23:17:44 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:52.563 23:17:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:52.822 23:17:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:52.822 23:17:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:52.822 23:17:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:52.822 23:17:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:52.822 23:17:44 -- common/autotest_common.sh@857 -- # local i 00:09:52.822 23:17:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:52.822 23:17:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:52.822 23:17:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:52.822 23:17:44 -- common/autotest_common.sh@861 -- # break 00:09:52.822 23:17:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:52.822 23:17:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:52.822 23:17:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.822 1+0 records in 00:09:52.822 1+0 records out 00:09:52.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647701 s, 6.3 MB/s 00:09:52.822 23:17:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.822 23:17:44 -- common/autotest_common.sh@874 -- # size=4096 00:09:52.822 23:17:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.822 23:17:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:52.822 23:17:44 -- common/autotest_common.sh@877 -- # return 0 00:09:52.822 23:17:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:52.822 23:17:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:52.823 23:17:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:53.082 23:17:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:53.082 23:17:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:53.082 23:17:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:53.082 23:17:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:09:53.082 23:17:44 -- common/autotest_common.sh@857 -- # local i 00:09:53.082 23:17:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:53.082 23:17:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:53.082 23:17:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:09:53.082 23:17:44 -- common/autotest_common.sh@861 -- # break 00:09:53.082 23:17:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:53.082 23:17:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:53.082 23:17:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.082 1+0 records in 00:09:53.082 1+0 records out 00:09:53.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731155 s, 5.6 MB/s 00:09:53.082 23:17:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.082 23:17:44 -- common/autotest_common.sh@874 -- # size=4096 00:09:53.082 23:17:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.082 23:17:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:53.082 23:17:44 -- common/autotest_common.sh@877 -- # return 0 
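The per-device readiness check repeated through this block follows one pattern; a simplified sketch of the waitfornbd helper as traced here (the scratch path and sleep interval are assumptions; the trace above uses the repo's test/bdev/nbdtest file):
# A device counts as ready once it shows up in /proc/partitions and a
# single 4 KiB direct-I/O read produces a non-empty scratch file.
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumption: interval not recorded in this log
    done
    dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]   # non-empty read means the device is usable
}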
00:09:53.082 23:17:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:53.082 23:17:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:53.082 23:17:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:53.341 23:17:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:53.341 23:17:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:53.341 23:17:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:53.341 23:17:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:09:53.341 23:17:44 -- common/autotest_common.sh@857 -- # local i 00:09:53.341 23:17:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:53.341 23:17:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:53.341 23:17:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:09:53.341 23:17:44 -- common/autotest_common.sh@861 -- # break 00:09:53.341 23:17:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:53.341 23:17:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:53.341 23:17:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.341 1+0 records in 00:09:53.341 1+0 records out 00:09:53.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641421 s, 6.4 MB/s 00:09:53.341 23:17:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.341 23:17:44 -- common/autotest_common.sh@874 -- # size=4096 00:09:53.341 23:17:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.341 23:17:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:53.341 23:17:44 -- common/autotest_common.sh@877 -- # return 0 00:09:53.341 23:17:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:53.341 23:17:44 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:53.341 23:17:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:53.601 23:17:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:53.601 23:17:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:53.601 23:17:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:53.601 23:17:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:09:53.601 23:17:45 -- common/autotest_common.sh@857 -- # local i 00:09:53.601 23:17:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:53.601 23:17:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:53.601 23:17:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:09:53.601 23:17:45 -- common/autotest_common.sh@861 -- # break 00:09:53.601 23:17:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:53.601 23:17:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:53.601 23:17:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.601 1+0 records in 00:09:53.601 1+0 records out 00:09:53.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000928726 s, 4.4 MB/s 00:09:53.601 23:17:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.601 23:17:45 -- common/autotest_common.sh@874 -- # size=4096 00:09:53.601 23:17:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.601 23:17:45 -- common/autotest_common.sh@876 -- # '[' 
4096 '!=' 0 ']' 00:09:53.601 23:17:45 -- common/autotest_common.sh@877 -- # return 0 00:09:53.601 23:17:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:53.601 23:17:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:53.601 23:17:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:53.860 23:17:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:53.860 23:17:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:53.860 23:17:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:53.860 23:17:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:09:53.860 23:17:45 -- common/autotest_common.sh@857 -- # local i 00:09:53.860 23:17:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:53.860 23:17:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:53.860 23:17:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:09:53.860 23:17:45 -- common/autotest_common.sh@861 -- # break 00:09:53.860 23:17:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:53.860 23:17:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:53.860 23:17:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.860 1+0 records in 00:09:53.860 1+0 records out 00:09:53.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000856285 s, 4.8 MB/s 00:09:53.860 23:17:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.860 23:17:45 -- common/autotest_common.sh@874 -- # size=4096 00:09:53.860 23:17:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:53.860 23:17:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:53.860 23:17:45 -- common/autotest_common.sh@877 -- # return 0 00:09:53.860 23:17:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:53.860 23:17:45 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:53.860 23:17:45 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:53.860 23:17:45 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:53.860 { 00:09:53.860 "nbd_device": "/dev/nbd0", 00:09:53.860 "bdev_name": "Nvme0n1" 00:09:53.860 }, 00:09:53.860 { 00:09:53.860 "nbd_device": "/dev/nbd1", 00:09:53.860 "bdev_name": "Nvme1n1" 00:09:53.860 }, 00:09:53.860 { 00:09:53.860 "nbd_device": "/dev/nbd2", 00:09:53.860 "bdev_name": "Nvme2n1" 00:09:53.860 }, 00:09:53.860 { 00:09:53.860 "nbd_device": "/dev/nbd3", 00:09:53.860 "bdev_name": "Nvme2n2" 00:09:53.860 }, 00:09:53.860 { 00:09:53.860 "nbd_device": "/dev/nbd4", 00:09:53.860 "bdev_name": "Nvme2n3" 00:09:53.860 }, 00:09:53.860 { 00:09:53.860 "nbd_device": "/dev/nbd5", 00:09:53.860 "bdev_name": "Nvme3n1" 00:09:53.860 } 00:09:53.860 ]' 00:09:53.861 23:17:45 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:53.861 23:17:45 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:53.861 { 00:09:53.861 "nbd_device": "/dev/nbd0", 00:09:53.861 "bdev_name": "Nvme0n1" 00:09:53.861 }, 00:09:53.861 { 00:09:53.861 "nbd_device": "/dev/nbd1", 00:09:53.861 "bdev_name": "Nvme1n1" 00:09:53.861 }, 00:09:53.861 { 00:09:53.861 "nbd_device": "/dev/nbd2", 00:09:53.861 "bdev_name": "Nvme2n1" 00:09:53.861 }, 00:09:53.861 { 00:09:53.861 "nbd_device": "/dev/nbd3", 00:09:53.861 "bdev_name": "Nvme2n2" 00:09:53.861 }, 00:09:53.861 { 00:09:53.861 "nbd_device": 
"/dev/nbd4", 00:09:53.861 "bdev_name": "Nvme2n3" 00:09:53.861 }, 00:09:53.861 { 00:09:53.861 "nbd_device": "/dev/nbd5", 00:09:53.861 "bdev_name": "Nvme3n1" 00:09:53.861 } 00:09:53.861 ]' 00:09:53.861 23:17:45 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@51 -- # local i 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@41 -- # break 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.120 23:17:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:54.378 23:17:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:54.378 23:17:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:54.378 23:17:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:54.378 23:17:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.378 23:17:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.378 23:17:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:54.378 23:17:46 -- bdev/nbd_common.sh@41 -- # break 00:09:54.378 23:17:46 -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.378 23:17:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.378 23:17:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:54.637 23:17:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:54.637 23:17:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:54.637 23:17:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:54.637 23:17:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.637 23:17:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.637 23:17:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:54.637 23:17:46 -- bdev/nbd_common.sh@41 -- # break 00:09:54.637 23:17:46 -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.637 23:17:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.637 23:17:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:54.895 
23:17:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@41 -- # break 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@41 -- # break 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.895 23:17:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:55.153 23:17:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:55.153 23:17:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:55.153 23:17:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:55.153 23:17:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:55.153 23:17:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:55.153 23:17:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:55.153 23:17:46 -- bdev/nbd_common.sh@41 -- # break 00:09:55.153 23:17:46 -- bdev/nbd_common.sh@45 -- # return 0 00:09:55.153 23:17:46 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:55.153 23:17:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.153 23:17:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@65 -- # true 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@65 -- # count=0 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@122 -- # count=0 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@127 -- # return 0 00:09:55.411 23:17:47 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@12 -- # local i 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:55.411 23:17:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:55.670 /dev/nbd0 00:09:55.670 23:17:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:55.670 23:17:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:55.670 23:17:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:55.670 23:17:47 -- common/autotest_common.sh@857 -- # local i 00:09:55.670 23:17:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:55.670 23:17:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:55.670 23:17:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:55.670 23:17:47 -- common/autotest_common.sh@861 -- # break 00:09:55.670 23:17:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:55.670 23:17:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:55.670 23:17:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.670 1+0 records in 00:09:55.670 1+0 records out 00:09:55.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610843 s, 6.7 MB/s 00:09:55.670 23:17:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.670 23:17:47 -- common/autotest_common.sh@874 -- # size=4096 00:09:55.670 23:17:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.670 23:17:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:55.670 23:17:47 -- common/autotest_common.sh@877 -- # return 0 00:09:55.670 23:17:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:55.670 23:17:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:55.670 23:17:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:55.928 /dev/nbd1 00:09:55.928 23:17:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:55.928 23:17:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:55.928 23:17:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:55.928 23:17:47 -- common/autotest_common.sh@857 -- # local i 00:09:55.929 23:17:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:55.929 23:17:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:55.929 23:17:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:55.929 23:17:47 -- common/autotest_common.sh@861 -- # break 
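The disk starts for the data-verify phase, traced around this point, are one RPC per bdev/device pair; a sketch using the socket, script, and device layout shown in this log:
# Attach each bdev to its nbd device over the dedicated RPC socket.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdevs=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
for i in "${!bdevs[@]}"; do
    "$rpc" -s "$sock" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
done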
00:09:55.929 23:17:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:55.929 23:17:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:55.929 23:17:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.929 1+0 records in 00:09:55.929 1+0 records out 00:09:55.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811838 s, 5.0 MB/s 00:09:55.929 23:17:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.929 23:17:47 -- common/autotest_common.sh@874 -- # size=4096 00:09:55.929 23:17:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.929 23:17:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:55.929 23:17:47 -- common/autotest_common.sh@877 -- # return 0 00:09:55.929 23:17:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:55.929 23:17:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:55.929 23:17:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:56.188 /dev/nbd10 00:09:56.188 23:17:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:56.188 23:17:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:56.188 23:17:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:09:56.188 23:17:47 -- common/autotest_common.sh@857 -- # local i 00:09:56.188 23:17:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:56.188 23:17:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:56.188 23:17:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:09:56.188 23:17:47 -- common/autotest_common.sh@861 -- # break 00:09:56.188 23:17:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:56.188 23:17:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:56.188 23:17:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:56.188 1+0 records in 00:09:56.188 1+0 records out 00:09:56.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575179 s, 7.1 MB/s 00:09:56.188 23:17:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.188 23:17:47 -- common/autotest_common.sh@874 -- # size=4096 00:09:56.188 23:17:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.188 23:17:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:56.188 23:17:47 -- common/autotest_common.sh@877 -- # return 0 00:09:56.188 23:17:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:56.188 23:17:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:56.188 23:17:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:56.188 /dev/nbd11 00:09:56.446 23:17:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:56.446 23:17:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:56.446 23:17:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:09:56.446 23:17:47 -- common/autotest_common.sh@857 -- # local i 00:09:56.446 23:17:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:56.446 23:17:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:56.446 23:17:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:09:56.446 23:17:47 -- 
common/autotest_common.sh@861 -- # break 00:09:56.446 23:17:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:56.446 23:17:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:56.446 23:17:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:56.446 1+0 records in 00:09:56.447 1+0 records out 00:09:56.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630138 s, 6.5 MB/s 00:09:56.447 23:17:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.447 23:17:47 -- common/autotest_common.sh@874 -- # size=4096 00:09:56.447 23:17:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.447 23:17:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:56.447 23:17:47 -- common/autotest_common.sh@877 -- # return 0 00:09:56.447 23:17:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:56.447 23:17:47 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:56.447 23:17:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:56.447 /dev/nbd12 00:09:56.447 23:17:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:56.447 23:17:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:56.447 23:17:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:09:56.447 23:17:48 -- common/autotest_common.sh@857 -- # local i 00:09:56.447 23:17:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:56.447 23:17:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:56.447 23:17:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:09:56.447 23:17:48 -- common/autotest_common.sh@861 -- # break 00:09:56.447 23:17:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:56.447 23:17:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:56.447 23:17:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:56.447 1+0 records in 00:09:56.447 1+0 records out 00:09:56.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000738764 s, 5.5 MB/s 00:09:56.447 23:17:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.447 23:17:48 -- common/autotest_common.sh@874 -- # size=4096 00:09:56.447 23:17:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.705 23:17:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:56.705 23:17:48 -- common/autotest_common.sh@877 -- # return 0 00:09:56.705 23:17:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:56.705 23:17:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:56.705 23:17:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:56.705 /dev/nbd13 00:09:56.705 23:17:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:56.705 23:17:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:56.705 23:17:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:09:56.705 23:17:48 -- common/autotest_common.sh@857 -- # local i 00:09:56.705 23:17:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:56.705 23:17:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:56.705 23:17:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 
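The write-and-compare pass traced a little further below follows one shape per device; a sketch with the file path, sizes, and flags taken from the dd and cmp records in this log:
# Write 1 MiB of random data through each nbd device with direct I/O,
# then read it back and compare against the source file.
src=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
dd if=/dev/urandom of="$src" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    dd if="$src" of="$nbd" bs=4096 count=256 oflag=direct   # write pass
    cmp -b -n 1M "$src" "$nbd"                              # read-back verify
done
rm "$src"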
00:09:56.705 23:17:48 -- common/autotest_common.sh@861 -- # break 00:09:56.705 23:17:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:56.705 23:17:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:56.705 23:17:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:56.705 1+0 records in 00:09:56.705 1+0 records out 00:09:56.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761508 s, 5.4 MB/s 00:09:56.705 23:17:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.705 23:17:48 -- common/autotest_common.sh@874 -- # size=4096 00:09:56.705 23:17:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.705 23:17:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:56.705 23:17:48 -- common/autotest_common.sh@877 -- # return 0 00:09:56.705 23:17:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:56.705 23:17:48 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:56.705 23:17:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:56.705 23:17:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.705 23:17:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:56.964 { 00:09:56.964 "nbd_device": "/dev/nbd0", 00:09:56.964 "bdev_name": "Nvme0n1" 00:09:56.964 }, 00:09:56.964 { 00:09:56.964 "nbd_device": "/dev/nbd1", 00:09:56.964 "bdev_name": "Nvme1n1" 00:09:56.964 }, 00:09:56.964 { 00:09:56.964 "nbd_device": "/dev/nbd10", 00:09:56.964 "bdev_name": "Nvme2n1" 00:09:56.964 }, 00:09:56.964 { 00:09:56.964 "nbd_device": "/dev/nbd11", 00:09:56.964 "bdev_name": "Nvme2n2" 00:09:56.964 }, 00:09:56.964 { 00:09:56.964 "nbd_device": "/dev/nbd12", 00:09:56.964 "bdev_name": "Nvme2n3" 00:09:56.964 }, 00:09:56.964 { 00:09:56.964 "nbd_device": "/dev/nbd13", 00:09:56.964 "bdev_name": "Nvme3n1" 00:09:56.964 } 00:09:56.964 ]' 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:56.964 { 00:09:56.964 "nbd_device": "/dev/nbd0", 00:09:56.964 "bdev_name": "Nvme0n1" 00:09:56.964 }, 00:09:56.964 { 00:09:56.964 "nbd_device": "/dev/nbd1", 00:09:56.964 "bdev_name": "Nvme1n1" 00:09:56.964 }, 00:09:56.964 { 00:09:56.964 "nbd_device": "/dev/nbd10", 00:09:56.964 "bdev_name": "Nvme2n1" 00:09:56.964 }, 00:09:56.964 { 00:09:56.964 "nbd_device": "/dev/nbd11", 00:09:56.964 "bdev_name": "Nvme2n2" 00:09:56.964 }, 00:09:56.964 { 00:09:56.964 "nbd_device": "/dev/nbd12", 00:09:56.964 "bdev_name": "Nvme2n3" 00:09:56.964 }, 00:09:56.964 { 00:09:56.964 "nbd_device": "/dev/nbd13", 00:09:56.964 "bdev_name": "Nvme3n1" 00:09:56.964 } 00:09:56.964 ]' 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:56.964 /dev/nbd1 00:09:56.964 /dev/nbd10 00:09:56.964 /dev/nbd11 00:09:56.964 /dev/nbd12 00:09:56.964 /dev/nbd13' 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:56.964 /dev/nbd1 00:09:56.964 /dev/nbd10 00:09:56.964 /dev/nbd11 00:09:56.964 /dev/nbd12 00:09:56.964 /dev/nbd13' 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@65 -- # count=6 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@66 -- # echo 6 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@95 -- # 
count=6 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:56.964 256+0 records in 00:09:56.964 256+0 records out 00:09:56.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115694 s, 90.6 MB/s 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:56.964 23:17:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:57.223 256+0 records in 00:09:57.223 256+0 records out 00:09:57.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132807 s, 7.9 MB/s 00:09:57.223 23:17:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:57.223 23:17:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:57.223 256+0 records in 00:09:57.223 256+0 records out 00:09:57.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127729 s, 8.2 MB/s 00:09:57.223 23:17:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:57.223 23:17:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:57.481 256+0 records in 00:09:57.481 256+0 records out 00:09:57.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128109 s, 8.2 MB/s 00:09:57.481 23:17:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:57.481 23:17:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:57.481 256+0 records in 00:09:57.481 256+0 records out 00:09:57.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129545 s, 8.1 MB/s 00:09:57.481 23:17:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:57.481 23:17:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:57.740 256+0 records in 00:09:57.740 256+0 records out 00:09:57.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126237 s, 8.3 MB/s 00:09:57.740 23:17:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:57.740 23:17:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:57.740 256+0 records in 00:09:57.740 256+0 records out 00:09:57.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131131 s, 8.0 MB/s 00:09:57.740 23:17:49 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:57.740 23:17:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:57.740 23:17:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:57.740 23:17:49 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:09:57.740 23:17:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:57.740 23:17:49 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:57.740 23:17:49 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:57.740 23:17:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:57.740 23:17:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@51 -- # local i 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.999 23:17:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@41 -- # break 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@45 -- # return 0 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:58.257 23:17:49 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@41 -- # break 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@45 -- # return 0 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.257 23:17:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:58.516 23:17:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:58.516 23:17:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:58.516 23:17:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:58.516 23:17:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:58.516 23:17:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:58.516 23:17:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:58.516 23:17:50 -- bdev/nbd_common.sh@41 -- # break 00:09:58.516 23:17:50 -- bdev/nbd_common.sh@45 -- # return 0 00:09:58.516 23:17:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.516 23:17:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:58.774 23:17:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:58.774 23:17:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:58.774 23:17:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:58.774 23:17:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:58.774 23:17:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:58.774 23:17:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:58.774 23:17:50 -- bdev/nbd_common.sh@41 -- # break 00:09:58.774 23:17:50 -- bdev/nbd_common.sh@45 -- # return 0 00:09:58.774 23:17:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.774 23:17:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@41 -- # break 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@41 -- # break 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.032 23:17:50 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:59.291 23:17:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:59.291 23:17:50 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:59.291 23:17:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:59.291 23:17:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:59.291 23:17:50 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:59.291 23:17:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:59.291 23:17:51 -- bdev/nbd_common.sh@65 -- # true 00:09:59.291 23:17:51 -- bdev/nbd_common.sh@65 -- # count=0 00:09:59.291 23:17:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:59.291 23:17:51 -- bdev/nbd_common.sh@104 -- # count=0 00:09:59.291 23:17:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:59.291 23:17:51 -- bdev/nbd_common.sh@109 -- # return 0 00:09:59.291 23:17:51 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:59.291 23:17:51 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.291 23:17:51 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:59.291 23:17:51 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:59.291 23:17:51 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:59.291 23:17:51 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:59.550 malloc_lvol_verify 00:09:59.550 23:17:51 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:59.808 9f8f94eb-928e-4921-ae23-f35a3b4d20df 00:09:59.808 23:17:51 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:59.808 77292064-2456-42d6-8266-1b7d581cd05a 00:10:00.067 23:17:51 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:00.067 /dev/nbd0 00:10:00.067 23:17:51 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:10:00.067 mke2fs 1.46.5 (30-Dec-2021) 00:10:00.067 Discarding device blocks: 0/4096 done 00:10:00.067 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:00.067 00:10:00.068 Allocating group tables: 0/1 done 00:10:00.068 Writing inode tables: 0/1 done 00:10:00.068 Creating journal (1024 blocks): done 00:10:00.068 Writing superblocks and filesystem accounting information: 0/1 done 00:10:00.068 00:10:00.068 23:17:51 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:10:00.068 23:17:51 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:00.068 23:17:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.068 23:17:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:00.068 23:17:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:00.068 23:17:51 -- bdev/nbd_common.sh@51 -- # local i 00:10:00.068 23:17:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:00.068 23:17:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:00.326 23:17:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:00.326 23:17:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:00.326 23:17:51 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:10:00.326 23:17:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:00.326 23:17:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:00.326 23:17:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:00.326 23:17:51 -- bdev/nbd_common.sh@41 -- # break 00:10:00.326 23:17:51 -- bdev/nbd_common.sh@45 -- # return 0 00:10:00.326 23:17:51 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:10:00.326 23:17:51 -- bdev/nbd_common.sh@147 -- # return 0 00:10:00.326 23:17:51 -- bdev/blockdev.sh@324 -- # killprocess 61588 00:10:00.326 23:17:51 -- common/autotest_common.sh@926 -- # '[' -z 61588 ']' 00:10:00.326 23:17:51 -- common/autotest_common.sh@930 -- # kill -0 61588 00:10:00.326 23:17:51 -- common/autotest_common.sh@931 -- # uname 00:10:00.327 23:17:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:00.327 23:17:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61588 00:10:00.327 23:17:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:00.327 23:17:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:00.327 23:17:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61588' 00:10:00.327 killing process with pid 61588 00:10:00.327 23:17:51 -- common/autotest_common.sh@945 -- # kill 61588 00:10:00.327 23:17:51 -- common/autotest_common.sh@950 -- # wait 61588 00:10:01.704 23:17:53 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:10:01.704 00:10:01.704 real 0m10.767s 00:10:01.704 user 0m13.476s 00:10:01.704 sys 0m4.311s 00:10:01.704 23:17:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.704 ************************************ 00:10:01.704 END TEST bdev_nbd 00:10:01.704 ************************************ 00:10:01.704 23:17:53 -- common/autotest_common.sh@10 -- # set +x 00:10:01.704 23:17:53 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:10:01.704 23:17:53 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:10:01.704 skipping fio tests on NVMe due to multi-ns failures. 00:10:01.704 23:17:53 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:01.704 23:17:53 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:01.704 23:17:53 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:01.704 23:17:53 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:01.704 23:17:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:01.704 23:17:53 -- common/autotest_common.sh@10 -- # set +x 00:10:01.704 ************************************ 00:10:01.704 START TEST bdev_verify 00:10:01.704 ************************************ 00:10:01.704 23:17:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:01.704 [2024-07-26 23:17:53.430416] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
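The nbd_dd_data_verify calls traced above implement the core data-integrity check of TEST bdev_nbd: one MiB of random data is written through every NBD export with O_DIRECT, then read back and byte-compared against the source file. A minimal bash sketch of that pattern, condensed from the traced commands (variable names follow the trace; error handling is omitted):

  # Essence of nbd_dd_data_verify, as traced (sketch).
  tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

  # Write phase: generate one random MiB, copy it to every export with O_DIRECT.
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # Verify phase: byte-compare the first MiB of every export against the source.
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"

Teardown then issues nbd_stop_disk per device over /var/tmp/spdk-nbd.sock and spins in waitfornbd_exit, which greps /proc/partitions (word-anchored, up to 20 tries) until the kernel device node is really gone before returning.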
00:10:01.704 [2024-07-26 23:17:53.430610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61969 ] 00:10:01.962 [2024-07-26 23:17:53.609952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:02.220 [2024-07-26 23:17:53.870091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.220 [2024-07-26 23:17:53.870120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.155 Running I/O for 5 seconds... 00:10:08.426 00:10:08.426 Latency(us) 00:10:08.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.426 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.426 Verification LBA range: start 0x0 length 0xbd0bd 00:10:08.426 Nvme0n1 : 5.04 2993.39 11.69 0.00 0.00 42652.01 6395.68 60640.54 00:10:08.426 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.426 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:08.426 Nvme0n1 : 5.05 2753.35 10.76 0.00 0.00 46371.01 4921.78 69062.84 00:10:08.426 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.426 Verification LBA range: start 0x0 length 0xa0000 00:10:08.426 Nvme1n1 : 5.04 2992.56 11.69 0.00 0.00 42632.27 6737.84 59377.20 00:10:08.426 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.426 Verification LBA range: start 0xa0000 length 0xa0000 00:10:08.426 Nvme1n1 : 5.05 2752.79 10.75 0.00 0.00 46225.40 4421.71 58956.08 00:10:08.426 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.426 Verification LBA range: start 0x0 length 0x80000 00:10:08.426 Nvme2n1 : 5.04 2991.75 11.69 0.00 0.00 42587.00 6579.92 55166.05 00:10:08.426 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.426 Verification LBA range: start 0x80000 length 0x80000 00:10:08.426 Nvme2n1 : 5.06 2759.35 10.78 0.00 0.00 46072.04 1921.34 58956.08 00:10:08.426 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.426 Verification LBA range: start 0x0 length 0x80000 00:10:08.426 Nvme2n2 : 5.05 2990.92 11.68 0.00 0.00 42572.13 6790.48 55166.05 00:10:08.426 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.426 Verification LBA range: start 0x80000 length 0x80000 00:10:08.426 Nvme2n2 : 5.06 2758.84 10.78 0.00 0.00 46019.72 2276.65 59798.31 00:10:08.426 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.426 Verification LBA range: start 0x0 length 0x80000 00:10:08.426 Nvme2n3 : 5.05 2990.13 11.68 0.00 0.00 42518.89 7001.03 53692.14 00:10:08.426 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.426 Verification LBA range: start 0x80000 length 0x80000 00:10:08.426 Nvme2n3 : 5.06 2758.33 10.77 0.00 0.00 45975.35 2618.81 60219.42 00:10:08.426 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.426 Verification LBA range: start 0x0 length 0x20000 00:10:08.426 Nvme3n1 : 5.05 2997.07 11.71 0.00 0.00 42374.51 480.33 50954.90 00:10:08.426 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.426 Verification LBA range: start 0x20000 length 0x20000 00:10:08.426 Nvme3n1 : 5.06 2757.81 10.77 0.00 0.00 45943.41 3026.76 61061.65 00:10:08.426 
=================================================================================================================== 00:10:08.426 Total : 34496.29 134.75 0.00 0.00 44257.62 480.33 69062.84 00:10:16.542 00:10:16.542 real 0m14.793s 00:10:16.542 user 0m27.854s 00:10:16.542 sys 0m0.505s 00:10:16.542 23:18:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.542 23:18:08 -- common/autotest_common.sh@10 -- # set +x 00:10:16.542 ************************************ 00:10:16.542 END TEST bdev_verify 00:10:16.542 ************************************ 00:10:16.542 23:18:08 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:16.542 23:18:08 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:16.542 23:18:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:16.542 23:18:08 -- common/autotest_common.sh@10 -- # set +x 00:10:16.542 ************************************ 00:10:16.542 START TEST bdev_verify_big_io 00:10:16.542 ************************************ 00:10:16.542 23:18:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:16.542 [2024-07-26 23:18:08.292524] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:16.542 [2024-07-26 23:18:08.292661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62145 ] 00:10:16.800 [2024-07-26 23:18:08.465758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:17.058 [2024-07-26 23:18:08.720442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.058 [2024-07-26 23:18:08.720471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.993 Running I/O for 5 seconds... 
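Both verify suites above drive the same bdevperf harness; only the I/O size differs (4096 bytes in bdev_verify, 65536 in bdev_verify_big_io, which is why the big-I/O run posts far lower IOPS in the table that follows). The traced invocation, with the options spelled out (the -C reading is inferred from the Core Mask 0x1/0x2 job pairs in the result tables):

  # bdevperf verify workload as run by the suite (sketch of the traced command).
  # -q 128     queue depth per job
  # -o 4096    I/O size in bytes (65536 in the big-I/O variant)
  # -w verify  write-then-read-back workload with data checking
  # -t 5       run time in seconds
  # -m 0x3     core mask: reactors on cores 0 and 1
  # -C         every core submits I/O to every bdev, giving two jobs per bdev
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''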
00:10:24.554 00:10:24.554 Latency(us) 00:10:24.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.554 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:24.554 Verification LBA range: start 0x0 length 0xbd0b 00:10:24.554 Nvme0n1 : 5.23 447.55 27.97 0.00 0.00 282864.94 9001.33 385741.21 00:10:24.554 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:24.554 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:24.554 Nvme0n1 : 5.29 229.59 14.35 0.00 0.00 548530.03 23477.15 805171.61 00:10:24.554 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:24.554 Verification LBA range: start 0x0 length 0xa000 00:10:24.555 Nvme1n1 : 5.23 447.38 27.96 0.00 0.00 280777.50 9106.61 353736.48 00:10:24.555 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:24.555 Verification LBA range: start 0xa000 length 0xa000 00:10:24.555 Nvme1n1 : 5.29 229.51 14.34 0.00 0.00 536878.35 24003.55 707472.96 00:10:24.555 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:24.555 Verification LBA range: start 0x0 length 0x8000 00:10:24.555 Nvme2n1 : 5.24 455.22 28.45 0.00 0.00 275047.36 5921.93 323416.21 00:10:24.555 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:24.555 Verification LBA range: start 0x8000 length 0x8000 00:10:24.555 Nvme2n1 : 5.31 235.76 14.73 0.00 0.00 511431.81 23266.60 609774.32 00:10:24.555 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:24.555 Verification LBA range: start 0x0 length 0x8000 00:10:24.555 Nvme2n2 : 5.25 455.03 28.44 0.00 0.00 272989.16 7001.03 291411.48 00:10:24.555 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:24.555 Verification LBA range: start 0x8000 length 0x8000 00:10:24.555 Nvme2n2 : 5.39 265.51 16.59 0.00 0.00 447306.44 14212.63 535658.10 00:10:24.555 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:24.555 Verification LBA range: start 0x0 length 0x8000 00:10:24.555 Nvme2n3 : 5.25 454.87 28.43 0.00 0.00 270880.66 7053.67 261091.21 00:10:24.555 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:24.555 Verification LBA range: start 0x8000 length 0x8000 00:10:24.555 Nvme2n3 : 5.44 304.73 19.05 0.00 0.00 384801.43 8053.82 471648.64 00:10:24.555 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:24.555 Verification LBA range: start 0x0 length 0x2000 00:10:24.555 Nvme3n1 : 5.25 454.70 28.42 0.00 0.00 268792.59 7264.23 264460.13 00:10:24.555 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:24.555 Verification LBA range: start 0x2000 length 0x2000 00:10:24.555 Nvme3n1 : 5.49 361.43 22.59 0.00 0.00 320225.94 496.78 454804.05 00:10:24.555 =================================================================================================================== 00:10:24.555 Total : 4341.26 271.33 0.00 0.00 338826.55 496.78 805171.61 00:10:25.568 00:10:25.569 real 0m8.832s 00:10:25.569 user 0m16.034s 00:10:25.569 sys 0m0.420s 00:10:25.569 23:18:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.569 23:18:17 -- common/autotest_common.sh@10 -- # set +x 00:10:25.569 ************************************ 00:10:25.569 END TEST bdev_verify_big_io 00:10:25.569 ************************************ 00:10:25.569 23:18:17 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:25.569 23:18:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:25.569 23:18:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.569 23:18:17 -- common/autotest_common.sh@10 -- # set +x 00:10:25.569 ************************************ 00:10:25.569 START TEST bdev_write_zeroes 00:10:25.569 ************************************ 00:10:25.569 23:18:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:25.569 [2024-07-26 23:18:17.201577] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:25.569 [2024-07-26 23:18:17.201715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62264 ] 00:10:25.828 [2024-07-26 23:18:17.372720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.087 [2024-07-26 23:18:17.631782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.654 Running I/O for 1 seconds... 00:10:28.030 00:10:28.030 Latency(us) 00:10:28.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.030 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:28.030 Nvme0n1 : 1.01 13304.84 51.97 0.00 0.00 9593.04 7790.62 22213.81 00:10:28.030 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:28.030 Nvme1n1 : 1.01 13291.10 51.92 0.00 0.00 9592.31 8317.02 22213.81 00:10:28.030 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:28.030 Nvme2n1 : 1.01 13278.81 51.87 0.00 0.00 9580.13 7948.54 22319.09 00:10:28.030 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:28.030 Nvme2n2 : 1.02 13329.31 52.07 0.00 0.00 9493.53 4526.98 22740.20 00:10:28.030 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:28.030 Nvme2n3 : 1.02 13316.90 52.02 0.00 0.00 9491.38 4658.58 23687.71 00:10:28.030 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:28.030 Nvme3n1 : 1.02 13305.04 51.97 0.00 0.00 9480.35 4869.14 22845.48 00:10:28.030 =================================================================================================================== 00:10:28.030 Total : 79825.99 311.82 0.00 0.00 9538.22 4526.98 23687.71 00:10:29.409 00:10:29.409 real 0m3.645s 00:10:29.409 user 0m3.178s 00:10:29.409 sys 0m0.350s 00:10:29.409 23:18:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.409 23:18:20 -- common/autotest_common.sh@10 -- # set +x 00:10:29.409 ************************************ 00:10:29.409 END TEST bdev_write_zeroes 00:10:29.409 ************************************ 00:10:29.409 23:18:20 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:29.409 23:18:20 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:29.409 23:18:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:29.409 23:18:20 -- common/autotest_common.sh@10 -- # 
set +x 00:10:29.409 ************************************ 00:10:29.409 START TEST bdev_json_nonenclosed 00:10:29.409 ************************************ 00:10:29.409 23:18:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:29.409 [2024-07-26 23:18:20.922144] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:29.409 [2024-07-26 23:18:20.922261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62319 ] 00:10:29.409 [2024-07-26 23:18:21.091958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.669 [2024-07-26 23:18:21.340766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.669 [2024-07-26 23:18:21.340999] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:29.669 [2024-07-26 23:18:21.341026] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:30.237 ************************************ 00:10:30.237 END TEST bdev_json_nonenclosed 00:10:30.237 ************************************ 00:10:30.237 00:10:30.237 real 0m0.965s 00:10:30.237 user 0m0.686s 00:10:30.237 sys 0m0.172s 00:10:30.237 23:18:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.237 23:18:21 -- common/autotest_common.sh@10 -- # set +x 00:10:30.237 23:18:21 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:30.237 23:18:21 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:30.237 23:18:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:30.237 23:18:21 -- common/autotest_common.sh@10 -- # set +x 00:10:30.237 ************************************ 00:10:30.237 START TEST bdev_json_nonarray 00:10:30.237 ************************************ 00:10:30.237 23:18:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:30.237 [2024-07-26 23:18:21.970372] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:30.237 [2024-07-26 23:18:21.970488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62350 ] 00:10:30.496 [2024-07-26 23:18:22.139242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.756 [2024-07-26 23:18:22.392177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.756 [2024-07-26 23:18:22.392382] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:30.756 [2024-07-26 23:18:22.392416] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:31.324 ************************************ 00:10:31.324 END TEST bdev_json_nonarray 00:10:31.324 ************************************ 00:10:31.324 00:10:31.324 real 0m0.978s 00:10:31.324 user 0m0.699s 00:10:31.324 sys 0m0.173s 00:10:31.324 23:18:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.324 23:18:22 -- common/autotest_common.sh@10 -- # set +x 00:10:31.324 23:18:22 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:10:31.324 23:18:22 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:10:31.324 23:18:22 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:10:31.325 23:18:22 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:10:31.325 23:18:22 -- bdev/blockdev.sh@809 -- # cleanup 00:10:31.325 23:18:22 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:31.325 23:18:22 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:31.325 23:18:22 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:10:31.325 23:18:22 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:10:31.325 23:18:22 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:10:31.325 23:18:22 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:10:31.325 ************************************ 00:10:31.325 END TEST blockdev_nvme 00:10:31.325 ************************************ 00:10:31.325 00:10:31.325 real 0m51.293s 00:10:31.325 user 1m17.341s 00:10:31.325 sys 0m7.960s 00:10:31.325 23:18:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.325 23:18:22 -- common/autotest_common.sh@10 -- # set +x 00:10:31.325 23:18:23 -- spdk/autotest.sh@219 -- # uname -s 00:10:31.325 23:18:23 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:10:31.325 23:18:23 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:31.325 23:18:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:31.325 23:18:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:31.325 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:10:31.325 ************************************ 00:10:31.325 START TEST blockdev_nvme_gpt 00:10:31.325 ************************************ 00:10:31.325 23:18:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:31.584 * Looking for test storage... 
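bdev_json_nonenclosed and bdev_json_nonarray, which finished above, are negative tests: each hands bdevperf a deliberately malformed --json config and passes only when the app refuses to start (json_config.c logs the ERROR and spdk_app_stop exits non-zero). A sketch of the idea; the file contents here are illustrative assumptions written to match the two logged error strings, not the repository's actual nonenclosed.json/nonarray.json:

  # "subsystems" as an object instead of an array
  #   -> "Invalid JSON configuration: 'subsystems' should be an array."
  printf '%s\n' '{ "subsystems": { "subsystem": "bdev", "config": [] } }' \
      > /tmp/nonarray.json

  # A bare array not enclosed in {} would instead trip
  #   -> "Invalid JSON configuration: not enclosed in {}."
  if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
          --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
      echo 'FAIL: malformed config was accepted' >&2
      exit 1
  fi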
00:10:31.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:31.584 23:18:23 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:31.584 23:18:23 -- bdev/nbd_common.sh@6 -- # set -e 00:10:31.584 23:18:23 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:31.584 23:18:23 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:31.584 23:18:23 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:31.584 23:18:23 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:31.584 23:18:23 -- bdev/blockdev.sh@18 -- # : 00:10:31.584 23:18:23 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:10:31.584 23:18:23 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:10:31.584 23:18:23 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:10:31.584 23:18:23 -- bdev/blockdev.sh@672 -- # uname -s 00:10:31.584 23:18:23 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:10:31.584 23:18:23 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:10:31.584 23:18:23 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:10:31.584 23:18:23 -- bdev/blockdev.sh@681 -- # crypto_device= 00:10:31.584 23:18:23 -- bdev/blockdev.sh@682 -- # dek= 00:10:31.584 23:18:23 -- bdev/blockdev.sh@683 -- # env_ctx= 00:10:31.584 23:18:23 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:10:31.584 23:18:23 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:10:31.584 23:18:23 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:10:31.584 23:18:23 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:10:31.584 23:18:23 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:10:31.584 23:18:23 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62425 00:10:31.584 23:18:23 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:31.584 23:18:23 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:31.584 23:18:23 -- bdev/blockdev.sh@47 -- # waitforlisten 62425 00:10:31.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.584 23:18:23 -- common/autotest_common.sh@819 -- # '[' -z 62425 ']' 00:10:31.584 23:18:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.584 23:18:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:31.584 23:18:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.584 23:18:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:31.584 23:18:23 -- common/autotest_common.sh@10 -- # set +x 00:10:31.584 [2024-07-26 23:18:23.273647] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
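start_spdk_tgt above launches the target in the background (pid 62425) and waitforlisten blocks until the JSON-RPC socket answers before any bdev RPCs are sent. A minimal sketch of that readiness loop; the rpc_get_methods probe, retry count, and sleep interval are assumptions standing in for the real helper in autotest_common.sh:

  # Start the SPDK target and wait for /var/tmp/spdk.sock to answer (sketch).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
  spdk_tgt_pid=$!

  for (( i = 0; i < 100; i++ )); do
      # Any cheap RPC works as a liveness probe once the app is listening.
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
              rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1   # target died early
      sleep 0.1
  done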
00:10:31.584 [2024-07-26 23:18:23.273985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62425 ] 00:10:31.843 [2024-07-26 23:18:23.443787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.102 [2024-07-26 23:18:23.701973] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:32.102 [2024-07-26 23:18:23.702337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.008 23:18:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:34.008 23:18:25 -- common/autotest_common.sh@852 -- # return 0 00:10:34.008 23:18:25 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:10:34.008 23:18:25 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:10:34.008 23:18:25 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:34.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:34.575 Waiting for block devices as requested 00:10:34.575 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.833 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.833 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:35.092 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:40.368 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:40.368 23:18:31 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:10:40.368 23:18:31 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:10:40.368 23:18:31 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:10:40.368 23:18:31 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:10:40.368 23:18:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.368 23:18:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:10:40.368 23:18:31 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:10:40.368 23:18:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:10:40.368 23:18:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.368 23:18:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.368 23:18:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:10:40.368 23:18:31 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:10:40.368 23:18:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:40.368 23:18:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.368 23:18:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.368 23:18:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:10:40.368 23:18:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:10:40.368 23:18:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:40.368 23:18:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.368 23:18:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.368 23:18:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:10:40.368 23:18:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:10:40.368 23:18:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:40.368 23:18:31 -- 
common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.368 23:18:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.368 23:18:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:10:40.368 23:18:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:10:40.368 23:18:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:40.368 23:18:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.368 23:18:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.368 23:18:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:10:40.368 23:18:31 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:10:40.368 23:18:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:40.368 23:18:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.368 23:18:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.368 23:18:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:10:40.368 23:18:31 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:10:40.368 23:18:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:40.368 23:18:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.368 23:18:31 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:10:40.368 23:18:31 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:10:40.368 23:18:31 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:10:40.368 23:18:31 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:40.368 23:18:31 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:10:40.368 23:18:31 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:10:40.368 23:18:31 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:10:40.368 23:18:31 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:10:40.368 BYT; 00:10:40.368 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:40.368 23:18:31 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:10:40.368 BYT; 00:10:40.368 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:40.368 23:18:31 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:10:40.368 23:18:31 -- bdev/blockdev.sh@114 -- # break 00:10:40.368 23:18:31 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:10:40.368 23:18:31 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:40.368 23:18:31 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:40.368 23:18:31 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:40.368 23:18:31 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:10:40.368 23:18:31 -- scripts/common.sh@410 -- # local spdk_guid 00:10:40.368 23:18:31 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:40.368 23:18:31 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:40.368 23:18:31 -- scripts/common.sh@415 -- # IFS='()' 00:10:40.368 23:18:31 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:10:40.368 23:18:31 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:40.368 23:18:31 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:40.368 23:18:31 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:40.368 23:18:31 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:40.368 23:18:31 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:40.368 23:18:31 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:10:40.368 23:18:31 -- scripts/common.sh@422 -- # local spdk_guid 00:10:40.368 23:18:31 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:40.368 23:18:31 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:40.368 23:18:31 -- scripts/common.sh@427 -- # IFS='()' 00:10:40.368 23:18:31 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:10:40.368 23:18:31 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:40.368 23:18:31 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:40.368 23:18:31 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:40.368 23:18:31 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:40.368 23:18:31 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:40.368 23:18:31 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:10:41.306 The operation has completed successfully. 00:10:41.306 23:18:32 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:10:42.242 The operation has completed successfully. 
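setup_gpt_conf, completed above, never hardcodes the partition-type GUIDs: get_spdk_gpt and get_spdk_gpt_old parse them out of module/bdev/gpt/gpt.h with an IFS='()' read, strip the 0x prefixes, and feed them to sgdisk so the gpt vbdev module will claim the two test partitions. A condensed sketch of the traced sequence:

  gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h

  # Extract the GUID between the parentheses of the #define, drop the 0x's.
  IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$gpt_h")
  spdk_guid=${spdk_guid//0x/}   # -> 6527994e-2c5a-4eec-9613-8f5944074e8b
  IFS='()' read -r _ old_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID_OLD "$gpt_h")
  old_guid=${old_guid//0x/}     # -> 7c5222bd-8f5d-4087-9c00-bf9843c7b58c

  # Label the first unpartitioned namespace, make two halves, then retype
  # them with SPDK's GUIDs and the fixed unique partition GUIDs.
  parted -s /dev/nvme2n1 mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1
  sgdisk -t 2:"$old_guid"  -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1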
00:10:42.242 23:18:33 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:43.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:43.618 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.618 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.877 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.877 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.877 23:18:35 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:10:43.877 23:18:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:43.877 23:18:35 -- common/autotest_common.sh@10 -- # set +x 00:10:43.877 [] 00:10:43.877 23:18:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:43.877 23:18:35 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:10:43.877 23:18:35 -- bdev/blockdev.sh@79 -- # local json 00:10:43.877 23:18:35 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:10:43.877 23:18:35 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:44.136 23:18:35 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:10:44.136 23:18:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:44.136 23:18:35 -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 23:18:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:44.395 23:18:35 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:10:44.395 23:18:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:44.395 23:18:35 -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 23:18:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:44.395 23:18:35 -- bdev/blockdev.sh@738 -- # cat 00:10:44.395 23:18:35 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:10:44.395 23:18:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:44.395 23:18:35 -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 23:18:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:44.395 23:18:35 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:10:44.395 23:18:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:44.395 23:18:35 -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 23:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:44.395 23:18:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:44.395 23:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:44.395 23:18:36 -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 23:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:44.395 23:18:36 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:10:44.395 23:18:36 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:10:44.395 23:18:36 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:10:44.395 23:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:44.395 23:18:36 -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 23:18:36 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:44.655 23:18:36 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:10:44.655 23:18:36 -- bdev/blockdev.sh@747 -- # jq -r .name 00:10:44.656 23:18:36 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "83490b9f-2c42-411a-b3b8-725cefabd6db"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "83490b9f-2c42-411a-b3b8-725cefabd6db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "a246f73c-bb76-4362-a024-c087607056f0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a246f73c-bb76-4362-a024-c087607056f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a8737c83-3648-48a7-82b1-fd19a678368c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a8737c83-3648-48a7-82b1-fd19a678368c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "0df41751-8c52-4eb3-a631-581e45336262"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0df41751-8c52-4eb3-a631-581e45336262",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e732de33-2b78-49cb-b9ab-02acb8caa955"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e732de33-2b78-49cb-b9ab-02acb8caa955",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:44.656 23:18:36 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:10:44.656 23:18:36 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:10:44.656 23:18:36 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:10:44.656 23:18:36 -- bdev/blockdev.sh@752 -- # killprocess 62425 00:10:44.656 23:18:36 -- common/autotest_common.sh@926 -- # '[' -z 62425 ']' 00:10:44.656 23:18:36 -- common/autotest_common.sh@930 -- # kill -0 62425 00:10:44.656 23:18:36 -- common/autotest_common.sh@931 -- # uname 00:10:44.656 23:18:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:44.656 23:18:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62425 00:10:44.656 killing process with pid 62425 00:10:44.656 23:18:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:44.656 23:18:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:44.656 23:18:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62425' 00:10:44.656 23:18:36 -- common/autotest_common.sh@945 -- # kill 62425 00:10:44.656 23:18:36 -- common/autotest_common.sh@950 -- # wait 62425 00:10:47.192 23:18:38 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:47.192 23:18:38 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:47.192 23:18:38 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:47.192 23:18:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:47.192 23:18:38 -- common/autotest_common.sh@10 -- # set +x 00:10:47.192 ************************************ 00:10:47.192 START TEST bdev_hello_world 00:10:47.192 ************************************ 00:10:47.192 23:18:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:47.192 [2024-07-26 23:18:38.550251] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:47.192 [2024-07-26 23:18:38.550352] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63108 ] 00:10:47.192 [2024-07-26 23:18:38.718417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.192 [2024-07-26 23:18:38.929720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.129 [2024-07-26 23:18:39.602700] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:48.129 [2024-07-26 23:18:39.602748] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:10:48.129 [2024-07-26 23:18:39.602767] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:48.129 [2024-07-26 23:18:39.605595] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:48.129 [2024-07-26 23:18:39.606094] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:48.129 [2024-07-26 23:18:39.606121] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:48.129 [2024-07-26 23:18:39.606483] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:10:48.129 00:10:48.130 [2024-07-26 23:18:39.606512] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:49.067 00:10:49.067 real 0m2.303s 00:10:49.067 user 0m1.951s 00:10:49.067 sys 0m0.245s 00:10:49.067 ************************************ 00:10:49.067 END TEST bdev_hello_world 00:10:49.067 ************************************ 00:10:49.067 23:18:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.067 23:18:40 -- common/autotest_common.sh@10 -- # set +x 00:10:49.326 23:18:40 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:10:49.326 23:18:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:49.326 23:18:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:49.326 23:18:40 -- common/autotest_common.sh@10 -- # set +x 00:10:49.326 ************************************ 00:10:49.326 START TEST bdev_bounds 00:10:49.326 ************************************ 00:10:49.326 23:18:40 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:10:49.326 Process bdevio pid: 63156 00:10:49.326 23:18:40 -- bdev/blockdev.sh@288 -- # bdevio_pid=63156 00:10:49.327 23:18:40 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:49.327 23:18:40 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:49.327 23:18:40 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 63156' 00:10:49.327 23:18:40 -- bdev/blockdev.sh@291 -- # waitforlisten 63156 00:10:49.327 23:18:40 -- common/autotest_common.sh@819 -- # '[' -z 63156 ']' 00:10:49.327 23:18:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.327 23:18:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:49.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.327 23:18:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
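For reference, the hello_world run above reduces to a single invocation of the example app; condensed from the trace (paths relative to the repo root):

build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1p1
# on success the app logs: Read string from bdev : Hello World!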
00:10:49.327 23:18:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:49.327 23:18:40 -- common/autotest_common.sh@10 -- # set +x 00:10:49.327 [2024-07-26 23:18:40.931308] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:49.327 [2024-07-26 23:18:40.931416] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63156 ] 00:10:49.586 [2024-07-26 23:18:41.105131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:49.586 [2024-07-26 23:18:41.319186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.586 [2024-07-26 23:18:41.319338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.586 [2024-07-26 23:18:41.319370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.963 23:18:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:50.963 23:18:42 -- common/autotest_common.sh@852 -- # return 0 00:10:50.963 23:18:42 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:50.963 I/O targets: 00:10:50.963 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:10:50.963 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:10:50.963 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:50.963 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:50.963 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:50.963 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:50.963 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:50.963 00:10:50.963 00:10:50.963 CUnit - A unit testing framework for C - Version 2.1-3 00:10:50.963 http://cunit.sourceforge.net/ 00:10:50.963 00:10:50.963 00:10:50.963 Suite: bdevio tests on: Nvme3n1 00:10:50.963 Test: blockdev write read block ...passed 00:10:50.963 Test: blockdev write zeroes read block ...passed 00:10:50.963 Test: blockdev write zeroes read no split ...passed 00:10:50.963 Test: blockdev write zeroes read split ...passed 00:10:50.963 Test: blockdev write zeroes read split partial ...passed 00:10:50.963 Test: blockdev reset ...[2024-07-26 23:18:42.563086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:50.963 [2024-07-26 23:18:42.567609] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
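Every suite that follows is driven by a single RPC helper; the harness invocation, condensed from the trace above:

test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # -w: wait for the start RPC
test/bdev/bdevio/tests.py perform_tests                        # drives all suites over /var/tmp/spdk.sock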
00:10:50.963 passed 00:10:50.963 Test: blockdev write read 8 blocks ...passed 00:10:50.963 Test: blockdev write read size > 128k ...passed 00:10:50.963 Test: blockdev write read invalid size ...passed 00:10:50.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:50.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:50.963 Test: blockdev write read max offset ...passed 00:10:50.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:50.963 Test: blockdev writev readv 8 blocks ...passed 00:10:50.963 Test: blockdev writev readv 30 x 1block ...passed 00:10:50.963 Test: blockdev writev readv block ...passed 00:10:50.963 Test: blockdev writev readv size > 128k ...passed 00:10:50.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:50.963 Test: blockdev comparev and writev ...[2024-07-26 23:18:42.578835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27260a000 len:0x1000 00:10:50.963 [2024-07-26 23:18:42.578916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:50.963 passed 00:10:50.963 Test: blockdev nvme passthru rw ...passed 00:10:50.963 Test: blockdev nvme passthru vendor specific ...passed 00:10:50.963 Test: blockdev nvme admin passthru ...[2024-07-26 23:18:42.579985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:50.963 [2024-07-26 23:18:42.580027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:50.963 passed 00:10:50.963 Test: blockdev copy ...passed 00:10:50.963 Suite: bdevio tests on: Nvme2n3 00:10:50.963 Test: blockdev write read block ...passed 00:10:50.963 Test: blockdev write zeroes read block ...passed 00:10:50.963 Test: blockdev write zeroes read no split ...passed 00:10:50.963 Test: blockdev write zeroes read split ...passed 00:10:50.963 Test: blockdev write zeroes read split partial ...passed 00:10:50.963 Test: blockdev reset ...[2024-07-26 23:18:42.660498] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:50.963 [2024-07-26 23:18:42.665336] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
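The COMPARE FAILURE (02/85) notices in the Nvme3n1 suite above are the miscompare path that the comparev-and-writev case exercises on purpose; the case still reports passed. A rough way to reproduce a miscompare by hand with nvme-cli, sketched under the assumption that the controller is bound back to the kernel nvme driver (device node and file name are hypothetical):

dd if=/dev/urandom of=/tmp/mismatch.bin bs=4096 count=1        # data that will not match LBA 0
sudo nvme compare /dev/nvme0n1 --start-block=0 --block-count=0 --data-size=4096 --data=/tmp/mismatch.bin
# expected completion status: COMPARE FAILURE (sct 0x2, sc 0x85), as in the notices above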
00:10:50.963 passed 00:10:50.963 Test: blockdev write read 8 blocks ...passed 00:10:50.963 Test: blockdev write read size > 128k ...passed 00:10:50.963 Test: blockdev write read invalid size ...passed 00:10:50.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:50.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:50.963 Test: blockdev write read max offset ...passed 00:10:50.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:50.963 Test: blockdev writev readv 8 blocks ...passed 00:10:50.963 Test: blockdev writev readv 30 x 1block ...passed 00:10:50.963 Test: blockdev writev readv block ...passed 00:10:50.963 Test: blockdev writev readv size > 128k ...passed 00:10:50.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:50.963 Test: blockdev comparev and writev ...[2024-07-26 23:18:42.676406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x251504000 len:0x1000 00:10:50.963 [2024-07-26 23:18:42.676610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:50.963 passed 00:10:50.963 Test: blockdev nvme passthru rw ...passed 00:10:50.963 Test: blockdev nvme passthru vendor specific ...[2024-07-26 23:18:42.678098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:50.963 [2024-07-26 23:18:42.678205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:10:50.963 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:10:50.963 passed 00:10:50.963 Test: blockdev copy ...passed 00:10:50.963 Suite: bdevio tests on: Nvme2n2 00:10:50.963 Test: blockdev write read block ...passed 00:10:50.963 Test: blockdev write zeroes read block ...passed 00:10:50.963 Test: blockdev write zeroes read no split ...passed 00:10:51.222 Test: blockdev write zeroes read split ...passed 00:10:51.222 Test: blockdev write zeroes read split partial ...passed 00:10:51.222 Test: blockdev reset ...[2024-07-26 23:18:42.756785] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:51.222 [2024-07-26 23:18:42.761287] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
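The PCIe controller behind each suite can be read back from the same bdev JSON dumped at the top of this output; a sketch against the still-running app (jq filter illustrative):

scripts/rpc.py bdev_get_bdevs -b Nvme2n3 | jq -r '.[0].driver_specific.nvme[0].pci_address'
# -> 0000:00:08.0, matching the reset notice above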
00:10:51.222 passed 00:10:51.222 Test: blockdev write read 8 blocks ...passed 00:10:51.222 Test: blockdev write read size > 128k ...passed 00:10:51.222 Test: blockdev write read invalid size ...passed 00:10:51.222 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.222 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.222 Test: blockdev write read max offset ...passed 00:10:51.222 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.222 Test: blockdev writev readv 8 blocks ...passed 00:10:51.223 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.223 Test: blockdev writev readv block ...passed 00:10:51.223 Test: blockdev writev readv size > 128k ...passed 00:10:51.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.223 Test: blockdev comparev and writev ...[2024-07-26 23:18:42.772092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x251504000 len:0x1000 00:10:51.223 [2024-07-26 23:18:42.772263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:51.223 passed 00:10:51.223 Test: blockdev nvme passthru rw ...passed 00:10:51.223 Test: blockdev nvme passthru vendor specific ...[2024-07-26 23:18:42.773626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:51.223 [2024-07-26 23:18:42.773789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed sqhd:001c p:1 m:0 dnr:1 00:10:51.223 00:10:51.223 Test: blockdev nvme admin passthru ...passed 00:10:51.223 Test: blockdev copy ...passed 00:10:51.223 Suite: bdevio tests on: Nvme2n1 00:10:51.223 Test: blockdev write read block ...passed 00:10:51.223 Test: blockdev write zeroes read block ...passed 00:10:51.223 Test: blockdev write zeroes read no split ...passed 00:10:51.223 Test: blockdev write zeroes read split ...passed 00:10:51.223 Test: blockdev write zeroes read split partial ...passed 00:10:51.223 Test: blockdev reset ...[2024-07-26 23:18:42.852633] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:51.223 [2024-07-26 23:18:42.857067] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:51.223 passed 00:10:51.223 Test: blockdev write read 8 blocks ...passed 00:10:51.223 Test: blockdev write read size > 128k ...passed 00:10:51.223 Test: blockdev write read invalid size ...passed 00:10:51.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.223 Test: blockdev write read max offset ...passed 00:10:51.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.223 Test: blockdev writev readv 8 blocks ...passed 00:10:51.223 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.223 Test: blockdev writev readv block ...passed 00:10:51.223 Test: blockdev writev readv size > 128k ...passed 00:10:51.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.223 Test: blockdev comparev and writev ...[2024-07-26 23:18:42.867882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28103c000 len:0x1000 00:10:51.223 [2024-07-26 23:18:42.868075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:51.223 passed 00:10:51.223 Test: blockdev nvme passthru rw ...passed 00:10:51.223 Test: blockdev nvme passthru vendor specific ...[2024-07-26 23:18:42.869487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:51.223 [2024-07-26 23:18:42.869634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:51.223 passed 00:10:51.223 Test: blockdev nvme admin passthru ...passed 00:10:51.223 Test: blockdev copy ...passed 00:10:51.223 Suite: bdevio tests on: Nvme1n1 00:10:51.223 Test: blockdev write read block ...passed 00:10:51.223 Test: blockdev write zeroes read block ...passed 00:10:51.223 Test: blockdev write zeroes read no split ...passed 00:10:51.223 Test: blockdev write zeroes read split ...passed 00:10:51.223 Test: blockdev write zeroes read split partial ...passed 00:10:51.223 Test: blockdev reset ...[2024-07-26 23:18:42.947368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:51.223 [2024-07-26 23:18:42.951722] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:51.223 passed 00:10:51.223 Test: blockdev write read 8 blocks ...passed 00:10:51.223 Test: blockdev write read size > 128k ...passed 00:10:51.223 Test: blockdev write read invalid size ...passed 00:10:51.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.223 Test: blockdev write read max offset ...passed 00:10:51.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.223 Test: blockdev writev readv 8 blocks ...passed 00:10:51.223 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.223 Test: blockdev writev readv block ...passed 00:10:51.223 Test: blockdev writev readv size > 128k ...passed 00:10:51.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.223 Test: blockdev comparev and writev ...[2024-07-26 23:18:42.962727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x281038000 len:0x1000 00:10:51.223 [2024-07-26 23:18:42.962912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:51.223 passed 00:10:51.223 Test: blockdev nvme passthru rw ...passed 00:10:51.223 Test: blockdev nvme passthru vendor specific ...[2024-07-26 23:18:42.964314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:51.223 [2024-07-26 23:18:42.964413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:10:51.223 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:10:51.223 passed 00:10:51.223 Test: blockdev copy ...passed 00:10:51.223 Suite: bdevio tests on: Nvme0n1p2 00:10:51.223 Test: blockdev write read block ...passed 00:10:51.223 Test: blockdev write zeroes read block ...passed 00:10:51.483 Test: blockdev write zeroes read no split ...passed 00:10:51.483 Test: blockdev write zeroes read split ...passed 00:10:51.483 Test: blockdev write zeroes read split partial ...passed 00:10:51.483 Test: blockdev reset ...[2024-07-26 23:18:43.047798] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:51.483 [2024-07-26 23:18:43.052126] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:51.483 passed 00:10:51.483 Test: blockdev write read 8 blocks ...passed 00:10:51.483 Test: blockdev write read size > 128k ...passed 00:10:51.483 Test: blockdev write read invalid size ...passed 00:10:51.483 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.483 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.483 Test: blockdev write read max offset ...passed 00:10:51.483 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.483 Test: blockdev writev readv 8 blocks ...passed 00:10:51.483 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.483 Test: blockdev writev readv block ...passed 00:10:51.483 Test: blockdev writev readv size > 128k ...passed 00:10:51.483 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.483 Test: blockdev comparev and writev ...passed 00:10:51.483 Test: blockdev nvme passthru rw ...passed 00:10:51.483 Test: blockdev nvme passthru vendor specific ...passed 00:10:51.483 Test: blockdev nvme admin passthru ...passed 00:10:51.483 Test: blockdev copy ...[2024-07-26 23:18:43.061201] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:10:51.483 separate metadata which is not supported yet. 00:10:51.483 passed 00:10:51.483 Suite: bdevio tests on: Nvme0n1p1 00:10:51.483 Test: blockdev write read block ...passed 00:10:51.483 Test: blockdev write zeroes read block ...passed 00:10:51.483 Test: blockdev write zeroes read no split ...passed 00:10:51.483 Test: blockdev write zeroes read split ...passed 00:10:51.483 Test: blockdev write zeroes read split partial ...passed 00:10:51.483 Test: blockdev reset ...[2024-07-26 23:18:43.131559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:51.483 [2024-07-26 23:18:43.135727] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:51.483 passed 00:10:51.483 Test: blockdev write read 8 blocks ...passed 00:10:51.483 Test: blockdev write read size > 128k ...passed 00:10:51.483 Test: blockdev write read invalid size ...passed 00:10:51.483 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.483 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.483 Test: blockdev write read max offset ...passed 00:10:51.483 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.483 Test: blockdev writev readv 8 blocks ...passed 00:10:51.483 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.483 Test: blockdev writev readv block ...passed 00:10:51.483 Test: blockdev writev readv size > 128k ...passed 00:10:51.483 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.483 Test: blockdev comparev and writev ...passed 00:10:51.483 Test: blockdev nvme passthru rw ...passed 00:10:51.483 Test: blockdev nvme passthru vendor specific ...passed 00:10:51.483 Test: blockdev nvme admin passthru ...passed 00:10:51.483 Test: blockdev copy ...[2024-07-26 23:18:43.145434] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:10:51.483 separate metadata which is not supported yet. 
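Nvme0n1p1 and Nvme0n1p2 skip comparev_and_writev because, as the message above notes, their namespace carries separate metadata. One way to confirm, sketched (field names as emitted by bdev_get_bdevs in this SPDK vintage; treat as illustrative):

scripts/rpc.py bdev_get_bdevs -b Nvme0n1p1 | jq '.[0].md_size, .[0].md_interleave'
# a non-zero md_size with md_interleave == false indicates separate metadata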
00:10:51.483 passed 00:10:51.483 00:10:51.483 Run Summary: Type Total Ran Passed Failed Inactive 00:10:51.483 suites 7 7 n/a 0 0 00:10:51.483 tests 161 161 161 0 0 00:10:51.483 asserts 1006 1006 1006 0 n/a 00:10:51.483 00:10:51.483 Elapsed time = 1.783 seconds 00:10:51.483 0 00:10:51.483 23:18:43 -- bdev/blockdev.sh@293 -- # killprocess 63156 00:10:51.483 23:18:43 -- common/autotest_common.sh@926 -- # '[' -z 63156 ']' 00:10:51.483 23:18:43 -- common/autotest_common.sh@930 -- # kill -0 63156 00:10:51.483 23:18:43 -- common/autotest_common.sh@931 -- # uname 00:10:51.483 23:18:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:51.483 23:18:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63156 00:10:51.483 23:18:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:51.483 23:18:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:51.483 23:18:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63156' 00:10:51.483 killing process with pid 63156 00:10:51.483 23:18:43 -- common/autotest_common.sh@945 -- # kill 63156 00:10:51.483 23:18:43 -- common/autotest_common.sh@950 -- # wait 63156 00:10:52.419 23:18:44 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:10:52.419 00:10:52.419 real 0m3.276s 00:10:52.419 user 0m8.292s 00:10:52.419 sys 0m0.427s 00:10:52.419 23:18:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.419 ************************************ 00:10:52.419 END TEST bdev_bounds 00:10:52.419 ************************************ 00:10:52.419 23:18:44 -- common/autotest_common.sh@10 -- # set +x 00:10:52.679 23:18:44 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:52.679 23:18:44 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:52.679 23:18:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:52.679 23:18:44 -- common/autotest_common.sh@10 -- # set +x 00:10:52.679 ************************************ 00:10:52.679 START TEST bdev_nbd 00:10:52.679 ************************************ 00:10:52.679 23:18:44 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:52.679 23:18:44 -- bdev/blockdev.sh@298 -- # uname -s 00:10:52.679 23:18:44 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:10:52.679 23:18:44 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.679 23:18:44 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:52.679 23:18:44 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:52.679 23:18:44 -- bdev/blockdev.sh@302 -- # local bdev_all 00:10:52.679 23:18:44 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:10:52.679 23:18:44 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:10:52.679 23:18:44 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:52.679 23:18:44 -- bdev/blockdev.sh@309 -- # local nbd_all 00:10:52.679 23:18:44 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:10:52.679 23:18:44 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:52.679 23:18:44 -- bdev/blockdev.sh@312 -- # local nbd_list 00:10:52.679 23:18:44 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:52.679 23:18:44 -- bdev/blockdev.sh@313 -- # local bdev_list 00:10:52.679 23:18:44 -- bdev/blockdev.sh@316 -- # nbd_pid=63223 00:10:52.679 23:18:44 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:52.679 23:18:44 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:52.679 23:18:44 -- bdev/blockdev.sh@318 -- # waitforlisten 63223 /var/tmp/spdk-nbd.sock 00:10:52.679 23:18:44 -- common/autotest_common.sh@819 -- # '[' -z 63223 ']' 00:10:52.679 23:18:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:52.679 23:18:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:52.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:52.679 23:18:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:52.679 23:18:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:52.679 23:18:44 -- common/autotest_common.sh@10 -- # set +x 00:10:52.679 [2024-07-26 23:18:44.308147] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:52.679 [2024-07-26 23:18:44.308265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.976 [2024-07-26 23:18:44.481799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.976 [2024-07-26 23:18:44.692131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.383 23:18:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:54.383 23:18:45 -- common/autotest_common.sh@852 -- # return 0 00:10:54.383 23:18:45 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@24 -- # local i 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:10:54.383 23:18:45 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:54.383 23:18:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:54.383 23:18:45 -- common/autotest_common.sh@857 -- # local i 00:10:54.383 23:18:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:54.383 23:18:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:54.383 23:18:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:54.383 23:18:45 -- common/autotest_common.sh@861 -- # break 00:10:54.383 23:18:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:54.383 23:18:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:54.383 23:18:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:54.383 1+0 records in 00:10:54.383 1+0 records out 00:10:54.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000851897 s, 4.8 MB/s 00:10:54.383 23:18:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.383 23:18:45 -- common/autotest_common.sh@874 -- # size=4096 00:10:54.383 23:18:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.383 23:18:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:54.383 23:18:45 -- common/autotest_common.sh@877 -- # return 0 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:54.383 23:18:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:10:54.642 23:18:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:54.642 23:18:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:54.642 23:18:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:54.642 23:18:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:54.642 23:18:46 -- common/autotest_common.sh@857 -- # local i 00:10:54.642 23:18:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:54.642 23:18:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:54.642 23:18:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:54.642 23:18:46 -- common/autotest_common.sh@861 -- # break 00:10:54.642 23:18:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:54.642 23:18:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:54.642 23:18:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:54.642 1+0 records in 00:10:54.642 1+0 records out 00:10:54.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062877 s, 6.5 MB/s 00:10:54.642 23:18:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.642 23:18:46 -- common/autotest_common.sh@874 -- # size=4096 00:10:54.642 23:18:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.642 23:18:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:54.642 23:18:46 -- common/autotest_common.sh@877 -- # return 0 00:10:54.642 23:18:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:54.643 23:18:46 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:54.643 23:18:46 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:54.902 23:18:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:54.902 23:18:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:54.902 23:18:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:54.902 23:18:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:10:54.902 23:18:46 -- common/autotest_common.sh@857 -- # local i 00:10:54.902 23:18:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:54.902 23:18:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:54.902 23:18:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:10:54.902 23:18:46 -- common/autotest_common.sh@861 -- # break 00:10:54.902 23:18:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:54.902 23:18:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:54.902 23:18:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:54.902 1+0 records in 00:10:54.902 1+0 records out 00:10:54.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683665 s, 6.0 MB/s 00:10:54.902 23:18:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.902 23:18:46 -- common/autotest_common.sh@874 -- # size=4096 00:10:54.902 23:18:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:54.902 23:18:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:54.902 23:18:46 -- common/autotest_common.sh@877 -- # return 0 00:10:54.902 23:18:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:54.902 23:18:46 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:54.902 23:18:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:54.902 23:18:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:54.902 23:18:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:54.902 23:18:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:54.902 23:18:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:10:54.902 23:18:46 -- common/autotest_common.sh@857 -- # local i 00:10:54.902 23:18:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:54.902 23:18:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:54.902 23:18:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:10:54.902 23:18:46 -- common/autotest_common.sh@861 -- # break 00:10:54.902 23:18:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:54.902 23:18:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:54.902 23:18:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.161 1+0 records in 00:10:55.161 1+0 records out 00:10:55.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689888 s, 5.9 MB/s 00:10:55.161 23:18:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.161 23:18:46 -- common/autotest_common.sh@874 -- # size=4096 00:10:55.161 23:18:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.161 23:18:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:55.161 23:18:46 -- common/autotest_common.sh@877 -- # return 0 00:10:55.161 23:18:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:55.161 23:18:46 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:55.161 23:18:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:55.161 23:18:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:55.161 23:18:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:55.161 23:18:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:55.161 23:18:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:10:55.161 23:18:46 -- common/autotest_common.sh@857 -- # local i 00:10:55.161 23:18:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:55.161 23:18:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:55.161 23:18:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:10:55.161 23:18:46 -- common/autotest_common.sh@861 -- # break 00:10:55.161 23:18:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:55.161 23:18:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:55.161 23:18:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.161 1+0 records in 00:10:55.161 1+0 records out 00:10:55.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00083109 s, 4.9 MB/s 00:10:55.161 23:18:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.161 23:18:46 -- common/autotest_common.sh@874 -- # size=4096 00:10:55.161 23:18:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.421 23:18:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:55.421 23:18:46 -- common/autotest_common.sh@877 -- # return 0 00:10:55.421 23:18:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:55.421 23:18:46 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:55.421 23:18:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:55.421 23:18:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:55.421 23:18:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:55.421 23:18:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:55.421 23:18:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:10:55.421 23:18:47 -- common/autotest_common.sh@857 -- # local i 00:10:55.421 23:18:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:55.421 23:18:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:55.421 23:18:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:10:55.421 23:18:47 -- common/autotest_common.sh@861 -- # break 00:10:55.421 23:18:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:55.421 23:18:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:55.421 23:18:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.421 1+0 records in 00:10:55.421 1+0 records out 00:10:55.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000817696 s, 5.0 MB/s 00:10:55.421 23:18:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.421 23:18:47 -- common/autotest_common.sh@874 -- # size=4096 00:10:55.421 23:18:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.421 23:18:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:55.421 23:18:47 -- common/autotest_common.sh@877 -- # return 0 
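Each waitfornbd block traced above follows the same attach-and-probe pattern; condensed:

scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd5
grep -q -w nbd5 /proc/partitions                                   # the helper retries this up to 20 times
dd if=/dev/nbd5 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct  # one direct-I/O block as a probe
size=$(stat -c %s test/bdev/nbdtest)                               # non-empty read means the node is live
rm -f test/bdev/nbdtest
[ "$size" != 0 ]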
00:10:55.421 23:18:47 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:55.421 23:18:47 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:55.421 23:18:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:55.680 23:18:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:55.680 23:18:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:55.680 23:18:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:55.680 23:18:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:10:55.680 23:18:47 -- common/autotest_common.sh@857 -- # local i 00:10:55.680 23:18:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:55.680 23:18:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:55.680 23:18:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:10:55.680 23:18:47 -- common/autotest_common.sh@861 -- # break 00:10:55.680 23:18:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:55.680 23:18:47 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:55.680 23:18:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.680 1+0 records in 00:10:55.680 1+0 records out 00:10:55.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000847949 s, 4.8 MB/s 00:10:55.680 23:18:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.680 23:18:47 -- common/autotest_common.sh@874 -- # size=4096 00:10:55.680 23:18:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.680 23:18:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:55.680 23:18:47 -- common/autotest_common.sh@877 -- # return 0 00:10:55.680 23:18:47 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:55.680 23:18:47 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:55.680 23:18:47 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:55.938 23:18:47 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:55.938 { 00:10:55.938 "nbd_device": "/dev/nbd0", 00:10:55.938 "bdev_name": "Nvme0n1p1" 00:10:55.938 }, 00:10:55.938 { 00:10:55.938 "nbd_device": "/dev/nbd1", 00:10:55.938 "bdev_name": "Nvme0n1p2" 00:10:55.938 }, 00:10:55.938 { 00:10:55.938 "nbd_device": "/dev/nbd2", 00:10:55.938 "bdev_name": "Nvme1n1" 00:10:55.938 }, 00:10:55.938 { 00:10:55.938 "nbd_device": "/dev/nbd3", 00:10:55.938 "bdev_name": "Nvme2n1" 00:10:55.938 }, 00:10:55.938 { 00:10:55.938 "nbd_device": "/dev/nbd4", 00:10:55.938 "bdev_name": "Nvme2n2" 00:10:55.938 }, 00:10:55.938 { 00:10:55.938 "nbd_device": "/dev/nbd5", 00:10:55.939 "bdev_name": "Nvme2n3" 00:10:55.939 }, 00:10:55.939 { 00:10:55.939 "nbd_device": "/dev/nbd6", 00:10:55.939 "bdev_name": "Nvme3n1" 00:10:55.939 } 00:10:55.939 ]' 00:10:55.939 23:18:47 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:55.939 23:18:47 -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:55.939 { 00:10:55.939 "nbd_device": "/dev/nbd0", 00:10:55.939 "bdev_name": "Nvme0n1p1" 00:10:55.939 }, 00:10:55.939 { 00:10:55.939 "nbd_device": "/dev/nbd1", 00:10:55.939 "bdev_name": "Nvme0n1p2" 00:10:55.939 }, 00:10:55.939 { 00:10:55.939 "nbd_device": "/dev/nbd2", 00:10:55.939 "bdev_name": "Nvme1n1" 00:10:55.939 }, 00:10:55.939 { 00:10:55.939 "nbd_device": "/dev/nbd3", 00:10:55.939 "bdev_name": "Nvme2n1" 00:10:55.939 }, 00:10:55.939 { 
00:10:55.939 "nbd_device": "/dev/nbd4", 00:10:55.939 "bdev_name": "Nvme2n2" 00:10:55.939 }, 00:10:55.939 { 00:10:55.939 "nbd_device": "/dev/nbd5", 00:10:55.939 "bdev_name": "Nvme2n3" 00:10:55.939 }, 00:10:55.939 { 00:10:55.939 "nbd_device": "/dev/nbd6", 00:10:55.939 "bdev_name": "Nvme3n1" 00:10:55.939 } 00:10:55.939 ]' 00:10:55.939 23:18:47 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:55.939 23:18:47 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:55.939 23:18:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:55.939 23:18:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:55.939 23:18:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:55.939 23:18:47 -- bdev/nbd_common.sh@51 -- # local i 00:10:55.939 23:18:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:55.939 23:18:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@41 -- # break 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.197 23:18:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:56.456 23:18:47 -- bdev/nbd_common.sh@41 -- # break 00:10:56.456 23:18:47 -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.456 23:18:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:56.456 23:18:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:56.456 23:18:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:56.456 23:18:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:56.456 23:18:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:56.456 23:18:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.456 23:18:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.456 23:18:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:56.456 23:18:48 -- bdev/nbd_common.sh@41 -- # break 00:10:56.456 23:18:48 -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.456 23:18:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:56.456 23:18:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:56.714 23:18:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:10:56.714 23:18:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:56.714 23:18:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:56.714 23:18:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.714 23:18:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.714 23:18:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:56.714 23:18:48 -- bdev/nbd_common.sh@41 -- # break 00:10:56.714 23:18:48 -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.714 23:18:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:56.714 23:18:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@41 -- # break 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@45 -- # return 0 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:56.974 23:18:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@41 -- # break 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@45 -- # return 0 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@41 -- # break 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@45 -- # return 0 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:57.234 23:18:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:57.494 23:18:49 -- 
bdev/nbd_common.sh@65 -- # true 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@65 -- # count=0 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@122 -- # count=0 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@127 -- # return 0 00:10:57.494 23:18:49 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@12 -- # local i 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:57.494 23:18:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:10:57.754 /dev/nbd0 00:10:57.754 23:18:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:57.754 23:18:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:57.754 23:18:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:57.754 23:18:49 -- common/autotest_common.sh@857 -- # local i 00:10:57.754 23:18:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:57.754 23:18:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:57.754 23:18:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:57.754 23:18:49 -- common/autotest_common.sh@861 -- # break 00:10:57.754 23:18:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:57.754 23:18:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:57.754 23:18:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.754 1+0 records in 00:10:57.754 1+0 records out 00:10:57.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072051 s, 5.7 MB/s 00:10:57.754 23:18:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.754 23:18:49 -- common/autotest_common.sh@874 -- # size=4096 00:10:57.754 23:18:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.754 23:18:49 
-- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:57.754 23:18:49 -- common/autotest_common.sh@877 -- # return 0 00:10:57.754 23:18:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:57.754 23:18:49 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:57.754 23:18:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:10:58.014 /dev/nbd1 00:10:58.014 23:18:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:58.014 23:18:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:58.014 23:18:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:58.014 23:18:49 -- common/autotest_common.sh@857 -- # local i 00:10:58.014 23:18:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:58.014 23:18:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:58.014 23:18:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:58.014 23:18:49 -- common/autotest_common.sh@861 -- # break 00:10:58.015 23:18:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:58.015 23:18:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:58.015 23:18:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:58.015 1+0 records in 00:10:58.015 1+0 records out 00:10:58.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661722 s, 6.2 MB/s 00:10:58.015 23:18:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.015 23:18:49 -- common/autotest_common.sh@874 -- # size=4096 00:10:58.015 23:18:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.015 23:18:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:58.015 23:18:49 -- common/autotest_common.sh@877 -- # return 0 00:10:58.015 23:18:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.015 23:18:49 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:58.015 23:18:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:10:58.274 /dev/nbd10 00:10:58.274 23:18:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:58.274 23:18:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:58.274 23:18:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:10:58.274 23:18:49 -- common/autotest_common.sh@857 -- # local i 00:10:58.274 23:18:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:58.274 23:18:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:58.274 23:18:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:10:58.274 23:18:49 -- common/autotest_common.sh@861 -- # break 00:10:58.274 23:18:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:58.274 23:18:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:58.274 23:18:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:58.274 1+0 records in 00:10:58.274 1+0 records out 00:10:58.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000738271 s, 5.5 MB/s 00:10:58.274 23:18:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.274 23:18:49 -- common/autotest_common.sh@874 -- # size=4096 00:10:58.274 23:18:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
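This data-verify pass maps each bdev onto the preset node list (nbd0, nbd1, then nbd10 through nbd14) rather than the consecutive nbd0-nbd6 used in the start/stop check above; the seven mappings, the last of which appear in the trace just below:

scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14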
00:10:58.274 23:18:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:58.274 23:18:49 -- common/autotest_common.sh@877 -- # return 0 00:10:58.274 23:18:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.274 23:18:49 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:58.274 23:18:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:58.533 /dev/nbd11 00:10:58.534 23:18:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:58.534 23:18:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:58.534 23:18:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:10:58.534 23:18:50 -- common/autotest_common.sh@857 -- # local i 00:10:58.534 23:18:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:58.534 23:18:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:58.534 23:18:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:10:58.534 23:18:50 -- common/autotest_common.sh@861 -- # break 00:10:58.534 23:18:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:58.534 23:18:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:58.534 23:18:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:58.534 1+0 records in 00:10:58.534 1+0 records out 00:10:58.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630663 s, 6.5 MB/s 00:10:58.534 23:18:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.534 23:18:50 -- common/autotest_common.sh@874 -- # size=4096 00:10:58.534 23:18:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.534 23:18:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:58.534 23:18:50 -- common/autotest_common.sh@877 -- # return 0 00:10:58.534 23:18:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.534 23:18:50 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:58.534 23:18:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:58.534 /dev/nbd12 00:10:58.534 23:18:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:58.534 23:18:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:58.534 23:18:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:10:58.534 23:18:50 -- common/autotest_common.sh@857 -- # local i 00:10:58.534 23:18:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:58.534 23:18:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:58.534 23:18:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:10:58.534 23:18:50 -- common/autotest_common.sh@861 -- # break 00:10:58.534 23:18:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:58.534 23:18:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:58.534 23:18:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:58.534 1+0 records in 00:10:58.534 1+0 records out 00:10:58.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103399 s, 4.0 MB/s 00:10:58.534 23:18:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.793 23:18:50 -- common/autotest_common.sh@874 -- # size=4096 00:10:58.794 23:18:50 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.794 23:18:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:58.794 23:18:50 -- common/autotest_common.sh@877 -- # return 0 00:10:58.794 23:18:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.794 23:18:50 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:58.794 23:18:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:58.794 /dev/nbd13 00:10:58.794 23:18:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:58.794 23:18:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:58.794 23:18:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:10:58.794 23:18:50 -- common/autotest_common.sh@857 -- # local i 00:10:58.794 23:18:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:58.794 23:18:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:58.794 23:18:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:10:58.794 23:18:50 -- common/autotest_common.sh@861 -- # break 00:10:58.794 23:18:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:58.794 23:18:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:58.794 23:18:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:58.794 1+0 records in 00:10:58.794 1+0 records out 00:10:58.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106436 s, 3.8 MB/s 00:10:58.794 23:18:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.794 23:18:50 -- common/autotest_common.sh@874 -- # size=4096 00:10:58.794 23:18:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.794 23:18:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:58.794 23:18:50 -- common/autotest_common.sh@877 -- # return 0 00:10:58.794 23:18:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.794 23:18:50 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:58.794 23:18:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:59.053 /dev/nbd14 00:10:59.053 23:18:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:59.054 23:18:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:59.054 23:18:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:10:59.054 23:18:50 -- common/autotest_common.sh@857 -- # local i 00:10:59.054 23:18:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:59.054 23:18:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:59.054 23:18:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:10:59.054 23:18:50 -- common/autotest_common.sh@861 -- # break 00:10:59.054 23:18:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:59.054 23:18:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:59.054 23:18:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.054 1+0 records in 00:10:59.054 1+0 records out 00:10:59.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000908553 s, 4.5 MB/s 00:10:59.054 23:18:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.054 23:18:50 -- common/autotest_common.sh@874 -- # size=4096 00:10:59.054 23:18:50 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.054 23:18:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:59.054 23:18:50 -- common/autotest_common.sh@877 -- # return 0 00:10:59.054 23:18:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:59.054 23:18:50 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:59.054 23:18:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:59.054 23:18:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.054 23:18:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:59.313 23:18:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:59.313 { 00:10:59.313 "nbd_device": "/dev/nbd0", 00:10:59.313 "bdev_name": "Nvme0n1p1" 00:10:59.313 }, 00:10:59.313 { 00:10:59.313 "nbd_device": "/dev/nbd1", 00:10:59.313 "bdev_name": "Nvme0n1p2" 00:10:59.313 }, 00:10:59.314 { 00:10:59.314 "nbd_device": "/dev/nbd10", 00:10:59.314 "bdev_name": "Nvme1n1" 00:10:59.314 }, 00:10:59.314 { 00:10:59.314 "nbd_device": "/dev/nbd11", 00:10:59.314 "bdev_name": "Nvme2n1" 00:10:59.314 }, 00:10:59.314 { 00:10:59.314 "nbd_device": "/dev/nbd12", 00:10:59.314 "bdev_name": "Nvme2n2" 00:10:59.314 }, 00:10:59.314 { 00:10:59.314 "nbd_device": "/dev/nbd13", 00:10:59.314 "bdev_name": "Nvme2n3" 00:10:59.314 }, 00:10:59.314 { 00:10:59.314 "nbd_device": "/dev/nbd14", 00:10:59.314 "bdev_name": "Nvme3n1" 00:10:59.314 } 00:10:59.314 ]' 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:59.314 { 00:10:59.314 "nbd_device": "/dev/nbd0", 00:10:59.314 "bdev_name": "Nvme0n1p1" 00:10:59.314 }, 00:10:59.314 { 00:10:59.314 "nbd_device": "/dev/nbd1", 00:10:59.314 "bdev_name": "Nvme0n1p2" 00:10:59.314 }, 00:10:59.314 { 00:10:59.314 "nbd_device": "/dev/nbd10", 00:10:59.314 "bdev_name": "Nvme1n1" 00:10:59.314 }, 00:10:59.314 { 00:10:59.314 "nbd_device": "/dev/nbd11", 00:10:59.314 "bdev_name": "Nvme2n1" 00:10:59.314 }, 00:10:59.314 { 00:10:59.314 "nbd_device": "/dev/nbd12", 00:10:59.314 "bdev_name": "Nvme2n2" 00:10:59.314 }, 00:10:59.314 { 00:10:59.314 "nbd_device": "/dev/nbd13", 00:10:59.314 "bdev_name": "Nvme2n3" 00:10:59.314 }, 00:10:59.314 { 00:10:59.314 "nbd_device": "/dev/nbd14", 00:10:59.314 "bdev_name": "Nvme3n1" 00:10:59.314 } 00:10:59.314 ]' 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:59.314 /dev/nbd1 00:10:59.314 /dev/nbd10 00:10:59.314 /dev/nbd11 00:10:59.314 /dev/nbd12 00:10:59.314 /dev/nbd13 00:10:59.314 /dev/nbd14' 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:59.314 /dev/nbd1 00:10:59.314 /dev/nbd10 00:10:59.314 /dev/nbd11 00:10:59.314 /dev/nbd12 00:10:59.314 /dev/nbd13 00:10:59.314 /dev/nbd14' 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@65 -- # count=7 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@66 -- # echo 7 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@95 -- # count=7 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:59.314 256+0 records in 00:10:59.314 256+0 records out 00:10:59.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110494 s, 94.9 MB/s 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:59.314 23:18:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:59.573 256+0 records in 00:10:59.573 256+0 records out 00:10:59.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143132 s, 7.3 MB/s 00:10:59.573 23:18:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:59.573 23:18:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:59.573 256+0 records in 00:10:59.573 256+0 records out 00:10:59.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145445 s, 7.2 MB/s 00:10:59.573 23:18:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:59.574 23:18:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:59.833 256+0 records in 00:10:59.833 256+0 records out 00:10:59.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146336 s, 7.2 MB/s 00:10:59.833 23:18:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:59.833 23:18:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:59.833 256+0 records in 00:10:59.833 256+0 records out 00:10:59.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143662 s, 7.3 MB/s 00:10:59.833 23:18:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:59.833 23:18:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:00.092 256+0 records in 00:11:00.092 256+0 records out 00:11:00.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144155 s, 7.3 MB/s 00:11:00.092 23:18:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:00.092 23:18:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:00.353 256+0 records in 00:11:00.353 256+0 records out 00:11:00.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144755 s, 7.2 MB/s 00:11:00.353 23:18:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:00.353 23:18:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:00.353 256+0 records in 00:11:00.353 256+0 records out 00:11:00.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146548 s, 7.2 MB/s 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@71 -- # local 
operation=verify 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:00.353 23:18:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@51 -- # local i 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@41 -- # break 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.613 23:18:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:00.873 23:18:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:00.873 23:18:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:00.873 23:18:52 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:11:00.873 23:18:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.873 23:18:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.873 23:18:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:00.873 23:18:52 -- bdev/nbd_common.sh@41 -- # break 00:11:00.873 23:18:52 -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.873 23:18:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.873 23:18:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:01.133 23:18:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:01.133 23:18:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:01.133 23:18:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:01.133 23:18:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.133 23:18:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.133 23:18:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:01.133 23:18:52 -- bdev/nbd_common.sh@41 -- # break 00:11:01.133 23:18:52 -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.133 23:18:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.133 23:18:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:01.392 23:18:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:01.392 23:18:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:01.392 23:18:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:01.392 23:18:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.392 23:18:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.392 23:18:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:01.393 23:18:52 -- bdev/nbd_common.sh@41 -- # break 00:11:01.393 23:18:52 -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.393 23:18:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.393 23:18:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:01.393 23:18:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:01.393 23:18:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:01.393 23:18:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:01.393 23:18:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.393 23:18:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.393 23:18:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:01.393 23:18:53 -- bdev/nbd_common.sh@41 -- # break 00:11:01.393 23:18:53 -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.393 23:18:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.393 23:18:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:01.652 23:18:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:01.652 23:18:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:01.652 23:18:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:01.652 23:18:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.652 23:18:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.652 23:18:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:01.652 23:18:53 -- bdev/nbd_common.sh@41 -- # break 00:11:01.652 23:18:53 -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.652 23:18:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:11:01.652 23:18:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:01.912 23:18:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:01.912 23:18:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:01.912 23:18:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:01.912 23:18:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.912 23:18:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.912 23:18:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:01.912 23:18:53 -- bdev/nbd_common.sh@41 -- # break 00:11:01.912 23:18:53 -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.912 23:18:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:01.912 23:18:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.912 23:18:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:02.171 23:18:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:02.171 23:18:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:02.171 23:18:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:02.171 23:18:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:02.171 23:18:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:02.171 23:18:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:02.171 23:18:53 -- bdev/nbd_common.sh@65 -- # true 00:11:02.171 23:18:53 -- bdev/nbd_common.sh@65 -- # count=0 00:11:02.171 23:18:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:02.171 23:18:53 -- bdev/nbd_common.sh@104 -- # count=0 00:11:02.171 23:18:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:02.171 23:18:53 -- bdev/nbd_common.sh@109 -- # return 0 00:11:02.172 23:18:53 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:02.172 23:18:53 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:02.172 23:18:53 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:02.172 23:18:53 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:02.172 23:18:53 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:02.172 23:18:53 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:02.431 malloc_lvol_verify 00:11:02.431 23:18:53 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:02.431 7a6866aa-ddae-4de1-8f82-4491fedfc2b3 00:11:02.431 23:18:54 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:02.690 c52dae0d-aeb0-4532-9c98-028b93c4cf02 00:11:02.690 23:18:54 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:02.950 /dev/nbd0 00:11:02.950 23:18:54 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:02.950 mke2fs 1.46.5 (30-Dec-2021) 00:11:02.950 Discarding device blocks: 0/4096 done 00:11:02.950 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:02.950 00:11:02.950 Allocating group tables: 0/1 done 00:11:02.950 Writing inode tables: 0/1 done 00:11:02.950 Creating journal (1024 blocks): done 
00:11:02.950 Writing superblocks and filesystem accounting information: 0/1 done 00:11:02.950 00:11:02.950 23:18:54 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:02.950 23:18:54 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:02.950 23:18:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:02.950 23:18:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:02.950 23:18:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:02.950 23:18:54 -- bdev/nbd_common.sh@51 -- # local i 00:11:02.950 23:18:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.950 23:18:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:03.208 23:18:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:03.209 23:18:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:03.209 23:18:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:03.209 23:18:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.209 23:18:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.209 23:18:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:03.209 23:18:54 -- bdev/nbd_common.sh@41 -- # break 00:11:03.209 23:18:54 -- bdev/nbd_common.sh@45 -- # return 0 00:11:03.209 23:18:54 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:03.209 23:18:54 -- bdev/nbd_common.sh@147 -- # return 0 00:11:03.209 23:18:54 -- bdev/blockdev.sh@324 -- # killprocess 63223 00:11:03.209 23:18:54 -- common/autotest_common.sh@926 -- # '[' -z 63223 ']' 00:11:03.209 23:18:54 -- common/autotest_common.sh@930 -- # kill -0 63223 00:11:03.209 23:18:54 -- common/autotest_common.sh@931 -- # uname 00:11:03.209 23:18:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:03.209 23:18:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63223 00:11:03.209 23:18:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:03.209 killing process with pid 63223 00:11:03.209 23:18:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:03.209 23:18:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63223' 00:11:03.209 23:18:54 -- common/autotest_common.sh@945 -- # kill 63223 00:11:03.209 23:18:54 -- common/autotest_common.sh@950 -- # wait 63223 00:11:04.605 23:18:56 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:11:04.605 00:11:04.605 real 0m11.968s 00:11:04.605 user 0m14.797s 00:11:04.605 sys 0m4.900s 00:11:04.605 23:18:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.605 23:18:56 -- common/autotest_common.sh@10 -- # set +x 00:11:04.605 ************************************ 00:11:04.605 END TEST bdev_nbd 00:11:04.605 ************************************ 00:11:04.605 23:18:56 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:11:04.605 23:18:56 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:11:04.605 23:18:56 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:11:04.605 skipping fio tests on NVMe due to multi-ns failures. 00:11:04.605 23:18:56 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
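For reference, the NBD round-trip exercised above (bdev_nbd) reduces to three RPCs plus dd/cmp. A minimal standalone sketch, assuming the spdk-nbd target is already listening on /var/tmp/spdk-nbd.sock and reusing the bdev/device names from this run:

    # export a bdev as a kernel block device over NBD
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10
    # wait for the kernel to register the device, as waitfornbd does above
    until grep -q -w nbd10 /proc/partitions; do sleep 0.1; done
    # write a 1 MiB random pattern, then read it back and compare
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd10
    # detach and confirm nothing is left exported
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks

The test above additionally asserts the exported count via nbd_get_disks before and after, and finishes with an lvol-backed mkfs.ext4 smoke test on /dev/nbd0.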
00:11:04.605 23:18:56 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:04.605 23:18:56 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:04.605 23:18:56 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:11:04.605 23:18:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:04.605 23:18:56 -- common/autotest_common.sh@10 -- # set +x 00:11:04.605 ************************************ 00:11:04.605 START TEST bdev_verify 00:11:04.605 ************************************ 00:11:04.605 23:18:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:04.605 [2024-07-26 23:18:56.338195] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:04.605 [2024-07-26 23:18:56.338326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63647 ] 00:11:04.864 [2024-07-26 23:18:56.509424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:05.123 [2024-07-26 23:18:56.764500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.123 [2024-07-26 23:18:56.764535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.060 Running I/O for 5 seconds... 00:11:11.328 00:11:11.328 Latency(us) 00:11:11.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.328 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0x0 length 0x5e800 00:11:11.328 Nvme0n1p1 : 5.06 1950.63 7.62 0.00 0.00 65394.46 10633.15 71589.53 00:11:11.328 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0x5e800 length 0x5e800 00:11:11.328 Nvme0n1p1 : 5.07 1582.45 6.18 0.00 0.00 80560.25 8843.41 74958.44 00:11:11.328 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0x0 length 0x5e7ff 00:11:11.328 Nvme0n1p2 : 5.07 1949.99 7.62 0.00 0.00 65359.60 8685.49 70326.18 00:11:11.328 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:11:11.328 Nvme0n1p2 : 5.08 1587.95 6.20 0.00 0.00 80262.62 6948.40 72431.76 00:11:11.328 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0x0 length 0xa0000 00:11:11.328 Nvme1n1 : 5.07 1949.50 7.62 0.00 0.00 65172.94 8843.41 67799.49 00:11:11.328 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0xa0000 length 0xa0000 00:11:11.328 Nvme1n1 : 5.08 1586.71 6.20 0.00 0.00 80222.43 8632.85 71589.53 00:11:11.328 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0x0 length 0x80000 00:11:11.328 Nvme2n1 : 5.07 1948.39 7.61 0.00 0.00 65128.08 10212.04 69905.07 00:11:11.328 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0x80000 length 0x80000 00:11:11.328 Nvme2n1 : 
5.08 1586.35 6.20 0.00 0.00 80135.58 8317.02 72431.76 00:11:11.328 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0x0 length 0x80000 00:11:11.328 Nvme2n2 : 5.08 1955.04 7.64 0.00 0.00 64928.24 3526.84 70326.18 00:11:11.328 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0x80000 length 0x80000 00:11:11.328 Nvme2n2 : 5.08 1586.05 6.20 0.00 0.00 80044.29 7948.54 74537.33 00:11:11.328 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0x0 length 0x80000 00:11:11.328 Nvme2n3 : 5.08 1954.43 7.63 0.00 0.00 64879.38 3895.31 66536.15 00:11:11.328 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0x80000 length 0x80000 00:11:11.328 Nvme2n3 : 5.08 1585.77 6.19 0.00 0.00 79952.41 8264.38 75800.67 00:11:11.328 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0x0 length 0x20000 00:11:11.328 Nvme3n1 : 5.08 1953.75 7.63 0.00 0.00 64819.94 4211.15 67378.38 00:11:11.328 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:11.328 Verification LBA range: start 0x20000 length 0x20000 00:11:11.328 Nvme3n1 : 5.09 1585.44 6.19 0.00 0.00 79895.42 8317.02 76642.90 00:11:11.328 =================================================================================================================== 00:11:11.328 Total : 24762.44 96.73 0.00 0.00 71853.23 3526.84 76642.90 00:11:13.232 00:11:13.232 real 0m8.394s 00:11:13.232 user 0m15.208s 00:11:13.232 sys 0m0.395s 00:11:13.232 23:19:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.232 ************************************ 00:11:13.232 END TEST bdev_verify 00:11:13.232 23:19:04 -- common/autotest_common.sh@10 -- # set +x 00:11:13.232 ************************************ 00:11:13.232 23:19:04 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:13.232 23:19:04 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:11:13.232 23:19:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:13.232 23:19:04 -- common/autotest_common.sh@10 -- # set +x 00:11:13.232 ************************************ 00:11:13.232 START TEST bdev_verify_big_io 00:11:13.232 ************************************ 00:11:13.232 23:19:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:13.232 [2024-07-26 23:19:04.818776] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:11:13.232 [2024-07-26 23:19:04.818888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63757 ] 00:11:13.490 [2024-07-26 23:19:04.989156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:13.749 [2024-07-26 23:19:05.251067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.749 [2024-07-26 23:19:05.251098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.686 Running I/O for 5 seconds... 00:11:21.257 00:11:21.257 Latency(us) 00:11:21.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.257 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:21.257 Verification LBA range: start 0x0 length 0x5e80 00:11:21.257 Nvme0n1p1 : 5.25 421.79 26.36 0.00 0.00 300162.41 24108.83 387425.67 00:11:21.257 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:21.257 Verification LBA range: start 0x5e80 length 0x5e80 00:11:21.257 Nvme0n1p1 : 5.31 228.50 14.28 0.00 0.00 552253.03 29688.60 798433.77 00:11:21.257 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:21.257 Verification LBA range: start 0x0 length 0x5e7f 00:11:21.257 Nvme0n1p2 : 5.25 421.63 26.35 0.00 0.00 298244.85 24424.66 363843.24 00:11:21.258 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:21.258 Verification LBA range: start 0x5e7f length 0x5e7f 00:11:21.258 Nvme0n1p2 : 5.31 228.42 14.28 0.00 0.00 542466.79 29688.60 714210.80 00:11:21.258 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:21.258 Verification LBA range: start 0x0 length 0xa000 00:11:21.258 Nvme1n1 : 5.25 421.46 26.34 0.00 0.00 296309.96 24740.50 343629.73 00:11:21.258 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:21.258 Verification LBA range: start 0xa000 length 0xa000 00:11:21.258 Nvme1n1 : 5.32 228.35 14.27 0.00 0.00 532358.30 30320.27 633356.75 00:11:21.258 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:21.258 Verification LBA range: start 0x0 length 0x8000 00:11:21.258 Nvme2n1 : 5.26 428.42 26.78 0.00 0.00 290636.28 7211.59 308256.08 00:11:21.258 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:21.258 Verification LBA range: start 0x8000 length 0x8000 00:11:21.258 Nvme2n1 : 5.36 242.04 15.13 0.00 0.00 495166.79 17055.15 565978.37 00:11:21.258 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:21.258 Verification LBA range: start 0x0 length 0x8000 00:11:21.258 Nvme2n2 : 5.26 428.22 26.76 0.00 0.00 288750.42 8159.10 330154.05 00:11:21.258 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:21.258 Verification LBA range: start 0x8000 length 0x8000 00:11:21.258 Nvme2n2 : 5.41 264.13 16.51 0.00 0.00 447681.91 18002.66 603036.48 00:11:21.258 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:21.258 Verification LBA range: start 0x0 length 0x8000 00:11:21.258 Nvme2n3 : 5.27 428.06 26.75 0.00 0.00 286893.77 8790.77 294780.40 00:11:21.258 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:21.258 Verification LBA range: start 0x8000 length 0x8000 00:11:21.258 Nvme2n3 : 5.46 295.07 18.44 0.00 0.00 396732.64 7895.90 609774.32 
00:11:21.258 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:21.258 Verification LBA range: start 0x0 length 0x2000 00:11:21.258 Nvme3n1 : 5.27 435.72 27.23 0.00 0.00 280422.03 1401.52 341945.27 00:11:21.258 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:21.258 Verification LBA range: start 0x2000 length 0x2000 00:11:21.258 Nvme3n1 : 5.50 346.98 21.69 0.00 0.00 333784.37 806.04 576085.13 00:11:21.258 =================================================================================================================== 00:11:21.258 Total : 4818.80 301.18 0.00 0.00 356071.04 806.04 798433.77 00:11:22.195 00:11:22.195 real 0m9.187s 00:11:22.195 user 0m16.721s 00:11:22.195 sys 0m0.445s 00:11:22.195 23:19:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.195 23:19:13 -- common/autotest_common.sh@10 -- # set +x 00:11:22.195 ************************************ 00:11:22.195 END TEST bdev_verify_big_io 00:11:22.195 ************************************ 00:11:22.483 23:19:13 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:22.483 23:19:13 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:22.483 23:19:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.483 23:19:13 -- common/autotest_common.sh@10 -- # set +x 00:11:22.483 ************************************ 00:11:22.483 START TEST bdev_write_zeroes 00:11:22.483 ************************************ 00:11:22.483 23:19:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:22.483 [2024-07-26 23:19:14.079624] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:22.483 [2024-07-26 23:19:14.079742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63878 ] 00:11:22.764 [2024-07-26 23:19:14.246380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.764 [2024-07-26 23:19:14.499270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.698 Running I/O for 1 seconds... 
00:11:24.631 00:11:24.631 Latency(us) 00:11:24.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.631 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:24.632 Nvme0n1p1 : 1.01 10036.71 39.21 0.00 0.00 12712.14 10001.48 30320.27 00:11:24.632 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:24.632 Nvme0n1p2 : 1.02 10025.58 39.16 0.00 0.00 12706.43 9948.84 31162.50 00:11:24.632 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:24.632 Nvme1n1 : 1.02 10016.08 39.13 0.00 0.00 12674.64 10369.95 29056.93 00:11:24.632 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:24.632 Nvme2n1 : 1.02 10047.16 39.25 0.00 0.00 12602.47 8159.10 24951.06 00:11:24.632 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:24.632 Nvme2n2 : 1.02 10036.93 39.21 0.00 0.00 12596.00 8527.58 24635.22 00:11:24.632 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:24.632 Nvme2n3 : 1.03 10102.83 39.46 0.00 0.00 12421.55 4553.30 18318.50 00:11:24.632 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:24.632 Nvme3n1 : 1.03 10093.41 39.43 0.00 0.00 12412.37 4763.86 17792.10 00:11:24.632 =================================================================================================================== 00:11:24.632 Total : 70358.71 274.84 0.00 0.00 12588.47 4553.30 31162.50 00:11:26.011 00:11:26.011 real 0m3.538s 00:11:26.011 user 0m3.094s 00:11:26.011 sys 0m0.330s 00:11:26.011 23:19:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.011 23:19:17 -- common/autotest_common.sh@10 -- # set +x 00:11:26.011 ************************************ 00:11:26.011 END TEST bdev_write_zeroes 00:11:26.011 ************************************ 00:11:26.011 23:19:17 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:26.011 23:19:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:26.011 23:19:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:26.011 23:19:17 -- common/autotest_common.sh@10 -- # set +x 00:11:26.011 ************************************ 00:11:26.011 START TEST bdev_json_nonenclosed 00:11:26.011 ************************************ 00:11:26.011 23:19:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:26.011 [2024-07-26 23:19:17.698677] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:26.011 [2024-07-26 23:19:17.698794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63931 ] 00:11:26.270 [2024-07-26 23:19:17.870868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.529 [2024-07-26 23:19:18.126871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.529 [2024-07-26 23:19:18.127068] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:11:26.529 [2024-07-26 23:19:18.127095] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:27.098 00:11:27.098 real 0m0.976s 00:11:27.098 user 0m0.698s 00:11:27.098 sys 0m0.171s 00:11:27.098 23:19:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.098 23:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:27.098 ************************************ 00:11:27.098 END TEST bdev_json_nonenclosed 00:11:27.098 ************************************ 00:11:27.098 23:19:18 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:27.098 23:19:18 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:27.098 23:19:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:27.098 23:19:18 -- common/autotest_common.sh@10 -- # set +x 00:11:27.098 ************************************ 00:11:27.098 START TEST bdev_json_nonarray 00:11:27.098 ************************************ 00:11:27.098 23:19:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:27.098 [2024-07-26 23:19:18.757595] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:27.098 [2024-07-26 23:19:18.757729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63962 ] 00:11:27.357 [2024-07-26 23:19:18.926742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.616 [2024-07-26 23:19:19.183519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.616 [2024-07-26 23:19:19.183724] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:11:27.616 [2024-07-26 23:19:19.183751] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:28.185 00:11:28.185 real 0m0.977s 00:11:28.185 user 0m0.694s 00:11:28.185 sys 0m0.178s 00:11:28.185 23:19:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.185 23:19:19 -- common/autotest_common.sh@10 -- # set +x 00:11:28.185 ************************************ 00:11:28.185 END TEST bdev_json_nonarray 00:11:28.185 ************************************ 00:11:28.185 23:19:19 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:11:28.185 23:19:19 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:11:28.185 23:19:19 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:28.185 23:19:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:28.185 23:19:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:28.185 23:19:19 -- common/autotest_common.sh@10 -- # set +x 00:11:28.185 ************************************ 00:11:28.185 START TEST bdev_gpt_uuid 00:11:28.185 ************************************ 00:11:28.185 23:19:19 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:11:28.185 23:19:19 -- bdev/blockdev.sh@612 -- # local bdev 00:11:28.185 23:19:19 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:11:28.185 23:19:19 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=63993 00:11:28.185 23:19:19 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:28.185 23:19:19 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:28.185 23:19:19 -- bdev/blockdev.sh@47 -- # waitforlisten 63993 00:11:28.185 23:19:19 -- common/autotest_common.sh@819 -- # '[' -z 63993 ']' 00:11:28.185 23:19:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.185 23:19:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:28.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.185 23:19:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.185 23:19:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:28.185 23:19:19 -- common/autotest_common.sh@10 -- # set +x 00:11:28.185 [2024-07-26 23:19:19.829233] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:28.185 [2024-07-26 23:19:19.829354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63993 ] 00:11:28.444 [2024-07-26 23:19:19.998356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.703 [2024-07-26 23:19:20.263639] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:28.703 [2024-07-26 23:19:20.263846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.608 23:19:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:30.608 23:19:21 -- common/autotest_common.sh@852 -- # return 0 00:11:30.608 23:19:21 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:30.608 23:19:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.608 23:19:21 -- common/autotest_common.sh@10 -- # set +x 00:11:30.608 Some configs were skipped because the RPC state that can call them passed over. 
00:11:30.608 23:19:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.608 23:19:22 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:11:30.608 23:19:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.608 23:19:22 -- common/autotest_common.sh@10 -- # set +x 00:11:30.608 23:19:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.608 23:19:22 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:30.608 23:19:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.608 23:19:22 -- common/autotest_common.sh@10 -- # set +x 00:11:30.608 23:19:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.608 23:19:22 -- bdev/blockdev.sh@619 -- # bdev='[ 00:11:30.608 { 00:11:30.608 "name": "Nvme0n1p1", 00:11:30.608 "aliases": [ 00:11:30.608 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:30.608 ], 00:11:30.608 "product_name": "GPT Disk", 00:11:30.608 "block_size": 4096, 00:11:30.608 "num_blocks": 774144, 00:11:30.608 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:30.608 "md_size": 64, 00:11:30.608 "md_interleave": false, 00:11:30.608 "dif_type": 0, 00:11:30.608 "assigned_rate_limits": { 00:11:30.608 "rw_ios_per_sec": 0, 00:11:30.608 "rw_mbytes_per_sec": 0, 00:11:30.608 "r_mbytes_per_sec": 0, 00:11:30.608 "w_mbytes_per_sec": 0 00:11:30.608 }, 00:11:30.608 "claimed": false, 00:11:30.608 "zoned": false, 00:11:30.608 "supported_io_types": { 00:11:30.608 "read": true, 00:11:30.608 "write": true, 00:11:30.608 "unmap": true, 00:11:30.608 "write_zeroes": true, 00:11:30.608 "flush": true, 00:11:30.608 "reset": true, 00:11:30.608 "compare": true, 00:11:30.608 "compare_and_write": false, 00:11:30.608 "abort": true, 00:11:30.608 "nvme_admin": false, 00:11:30.608 "nvme_io": false 00:11:30.608 }, 00:11:30.608 "driver_specific": { 00:11:30.608 "gpt": { 00:11:30.608 "base_bdev": "Nvme0n1", 00:11:30.608 "offset_blocks": 256, 00:11:30.608 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:30.608 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:30.608 "partition_name": "SPDK_TEST_first" 00:11:30.608 } 00:11:30.608 } 00:11:30.608 } 00:11:30.608 ]' 00:11:30.608 23:19:22 -- bdev/blockdev.sh@620 -- # jq -r length 00:11:30.608 23:19:22 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:11:30.608 23:19:22 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:11:30.867 23:19:22 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:30.867 23:19:22 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:30.867 23:19:22 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:30.867 23:19:22 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:30.867 23:19:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.867 23:19:22 -- common/autotest_common.sh@10 -- # set +x 00:11:30.867 23:19:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.867 23:19:22 -- bdev/blockdev.sh@624 -- # bdev='[ 00:11:30.867 { 00:11:30.867 "name": "Nvme0n1p2", 00:11:30.867 "aliases": [ 00:11:30.867 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:30.867 ], 00:11:30.867 "product_name": "GPT Disk", 00:11:30.867 "block_size": 4096, 00:11:30.867 "num_blocks": 774143, 00:11:30.867 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:11:30.867 "md_size": 64, 00:11:30.867 "md_interleave": false, 00:11:30.867 "dif_type": 0, 00:11:30.867 "assigned_rate_limits": { 00:11:30.867 "rw_ios_per_sec": 0, 00:11:30.867 "rw_mbytes_per_sec": 0, 00:11:30.867 "r_mbytes_per_sec": 0, 00:11:30.867 "w_mbytes_per_sec": 0 00:11:30.867 }, 00:11:30.867 "claimed": false, 00:11:30.867 "zoned": false, 00:11:30.867 "supported_io_types": { 00:11:30.867 "read": true, 00:11:30.867 "write": true, 00:11:30.867 "unmap": true, 00:11:30.867 "write_zeroes": true, 00:11:30.868 "flush": true, 00:11:30.868 "reset": true, 00:11:30.868 "compare": true, 00:11:30.868 "compare_and_write": false, 00:11:30.868 "abort": true, 00:11:30.868 "nvme_admin": false, 00:11:30.868 "nvme_io": false 00:11:30.868 }, 00:11:30.868 "driver_specific": { 00:11:30.868 "gpt": { 00:11:30.868 "base_bdev": "Nvme0n1", 00:11:30.868 "offset_blocks": 774400, 00:11:30.868 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:30.868 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:30.868 "partition_name": "SPDK_TEST_second" 00:11:30.868 } 00:11:30.868 } 00:11:30.868 } 00:11:30.868 ]' 00:11:30.868 23:19:22 -- bdev/blockdev.sh@625 -- # jq -r length 00:11:30.868 23:19:22 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:11:30.868 23:19:22 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:11:30.868 23:19:22 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:30.868 23:19:22 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:30.868 23:19:22 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:30.868 23:19:22 -- bdev/blockdev.sh@629 -- # killprocess 63993 00:11:30.868 23:19:22 -- common/autotest_common.sh@926 -- # '[' -z 63993 ']' 00:11:30.868 23:19:22 -- common/autotest_common.sh@930 -- # kill -0 63993 00:11:30.868 23:19:22 -- common/autotest_common.sh@931 -- # uname 00:11:30.868 23:19:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:30.868 23:19:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63993 00:11:30.868 23:19:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:30.868 killing process with pid 63993 00:11:30.868 23:19:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:30.868 23:19:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63993' 00:11:30.868 23:19:22 -- common/autotest_common.sh@945 -- # kill 63993 00:11:30.868 23:19:22 -- common/autotest_common.sh@950 -- # wait 63993 00:11:33.403 00:11:33.403 real 0m5.354s 00:11:33.403 user 0m5.525s 00:11:33.403 sys 0m0.709s 00:11:33.403 23:19:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.403 23:19:25 -- common/autotest_common.sh@10 -- # set +x 00:11:33.403 ************************************ 00:11:33.403 END TEST bdev_gpt_uuid 00:11:33.403 ************************************ 00:11:33.403 23:19:25 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:11:33.403 23:19:25 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:11:33.403 23:19:25 -- bdev/blockdev.sh@809 -- # cleanup 00:11:33.403 23:19:25 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:33.403 23:19:25 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:33.403 23:19:25 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:11:33.403 23:19:25 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:11:33.403 23:19:25 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:11:33.403 23:19:25 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:34.341 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:34.341 Waiting for block devices as requested 00:11:34.601 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.601 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.601 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:34.860 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:40.136 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:40.136 23:19:31 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:11:40.136 23:19:31 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:11:40.136 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:40.136 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:11:40.136 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:40.136 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:11:40.136 23:19:31 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:11:40.136 00:11:40.136 real 1m8.815s 00:11:40.136 user 1m24.133s 00:11:40.136 sys 0m12.674s 00:11:40.136 23:19:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.136 23:19:31 -- common/autotest_common.sh@10 -- # set +x 00:11:40.136 ************************************ 00:11:40.136 END TEST blockdev_nvme_gpt 00:11:40.136 ************************************ 00:11:40.395 23:19:31 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:40.395 23:19:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:40.395 23:19:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:40.395 23:19:31 -- common/autotest_common.sh@10 -- # set +x 00:11:40.395 ************************************ 00:11:40.395 START TEST nvme 00:11:40.395 ************************************ 00:11:40.395 23:19:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:40.395 * Looking for test storage... 00:11:40.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:40.395 23:19:32 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:41.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:41.774 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:41.774 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:42.033 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:42.033 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:42.033 23:19:33 -- nvme/nvme.sh@79 -- # uname 00:11:42.033 23:19:33 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:42.033 23:19:33 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:42.033 23:19:33 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:42.033 23:19:33 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:42.033 23:19:33 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:11:42.033 23:19:33 -- common/autotest_common.sh@1045 -- # echo 0 00:11:42.033 Waiting for stub to ready for secondary processes... 
00:11:42.033 23:19:33 -- common/autotest_common.sh@1047 -- # stubpid=64695 00:11:42.033 23:19:33 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:42.033 23:19:33 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:11:42.033 23:19:33 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:42.033 23:19:33 -- common/autotest_common.sh@1051 -- # [[ -e /proc/64695 ]] 00:11:42.033 23:19:33 -- common/autotest_common.sh@1052 -- # sleep 1s 00:11:42.033 [2024-07-26 23:19:33.745667] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:42.033 [2024-07-26 23:19:33.745762] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.969 23:19:34 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:42.969 23:19:34 -- common/autotest_common.sh@1051 -- # [[ -e /proc/64695 ]] 00:11:42.969 23:19:34 -- common/autotest_common.sh@1052 -- # sleep 1s 00:11:43.227 [2024-07-26 23:19:34.746187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:43.227 [2024-07-26 23:19:34.957931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.227 [2024-07-26 23:19:34.958133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.227 [2024-07-26 23:19:34.958158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.486 [2024-07-26 23:19:34.988899] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:43.486 [2024-07-26 23:19:35.007555] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:43.486 [2024-07-26 23:19:35.007771] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:43.486 [2024-07-26 23:19:35.021933] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:43.486 [2024-07-26 23:19:35.022123] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:43.486 [2024-07-26 23:19:35.022269] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:43.486 [2024-07-26 23:19:35.035530] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:43.486 [2024-07-26 23:19:35.035737] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:43.486 [2024-07-26 23:19:35.035889] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:43.486 [2024-07-26 23:19:35.049038] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:43.486 [2024-07-26 23:19:35.049244] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:43.486 [2024-07-26 23:19:35.049393] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:43.486 [2024-07-26 23:19:35.049579] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:43.486 [2024-07-26 23:19:35.049759] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:44.053 23:19:35 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 
00:11:44.053 done. 00:11:44.053 23:19:35 -- common/autotest_common.sh@1054 -- # echo done. 00:11:44.053 23:19:35 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:44.053 23:19:35 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:11:44.053 23:19:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:44.053 23:19:35 -- common/autotest_common.sh@10 -- # set +x 00:11:44.053 ************************************ 00:11:44.053 START TEST nvme_reset 00:11:44.053 ************************************ 00:11:44.053 23:19:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:44.312 Initializing NVMe Controllers 00:11:44.312 Skipping QEMU NVMe SSD at 0000:00:06.0 00:11:44.312 Skipping QEMU NVMe SSD at 0000:00:07.0 00:11:44.312 Skipping QEMU NVMe SSD at 0000:00:09.0 00:11:44.312 Skipping QEMU NVMe SSD at 0000:00:08.0 00:11:44.312 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:44.312 00:11:44.312 real 0m0.281s 00:11:44.312 user 0m0.083s 00:11:44.312 sys 0m0.150s 00:11:44.312 23:19:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.312 ************************************ 00:11:44.312 END TEST nvme_reset 00:11:44.312 ************************************ 00:11:44.312 23:19:35 -- common/autotest_common.sh@10 -- # set +x 00:11:44.312 23:19:36 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:44.312 23:19:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:44.312 23:19:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:44.312 23:19:36 -- common/autotest_common.sh@10 -- # set +x 00:11:44.312 ************************************ 00:11:44.312 START TEST nvme_identify 00:11:44.312 ************************************ 00:11:44.312 23:19:36 -- common/autotest_common.sh@1104 -- # nvme_identify 00:11:44.312 23:19:36 -- nvme/nvme.sh@12 -- # bdfs=() 00:11:44.312 23:19:36 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:44.312 23:19:36 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:44.312 23:19:36 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:44.312 23:19:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:44.312 23:19:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:44.312 23:19:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:44.313 23:19:36 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:44.313 23:19:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:44.571 23:19:36 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:44.571 23:19:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:44.571 23:19:36 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:44.832 ===================================================== 00:11:44.832 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:44.832 ===================================================== 00:11:44.832 Controller Capabilities/Features 00:11:44.832 ================================ 00:11:44.832 Vendor ID: 1b36 00:11:44.832 Subsystem Vendor ID: 1af4 00:11:44.832 Serial Number: 12340 00:11:44.832 Model Number: QEMU NVMe Ctrl 00:11:44.832 Firmware Version: 8.0.0 00:11:44.832 Recommended Arb Burst: 6 00:11:44.832 IEEE OUI Identifier: 00 54 52 
00:11:44.832 Multi-path I/O 00:11:44.832 May have multiple subsystem ports: No 00:11:44.832 May have multiple controllers: No 00:11:44.832 Associated with SR-IOV VF: No 00:11:44.832 Max Data Transfer Size: 524288 00:11:44.832 Max Number of Namespaces: 256 00:11:44.832 Max Number of I/O Queues: 64 00:11:44.832 NVMe Specification Version (VS): 1.4 00:11:44.832 NVMe Specification Version (Identify): 1.4 00:11:44.832 Maximum Queue Entries: 2048 00:11:44.832 Contiguous Queues Required: Yes 00:11:44.832 Arbitration Mechanisms Supported 00:11:44.832 Weighted Round Robin: Not Supported 00:11:44.832 Vendor Specific: Not Supported 00:11:44.832 Reset Timeout: 7500 ms 00:11:44.832 Doorbell Stride: 4 bytes 00:11:44.832 NVM Subsystem Reset: Not Supported 00:11:44.832 Command Sets Supported 00:11:44.832 NVM Command Set: Supported 00:11:44.832 Boot Partition: Not Supported 00:11:44.832 Memory Page Size Minimum: 4096 bytes 00:11:44.832 Memory Page Size Maximum: 65536 bytes 00:11:44.832 Persistent Memory Region: Not Supported 00:11:44.832 Optional Asynchronous Events Supported 00:11:44.832 Namespace Attribute Notices: Supported 00:11:44.832 Firmware Activation Notices: Not Supported 00:11:44.832 ANA Change Notices: Not Supported 00:11:44.832 PLE Aggregate Log Change Notices: Not Supported 00:11:44.832 LBA Status Info Alert Notices: Not Supported 00:11:44.832 EGE Aggregate Log Change Notices: Not Supported 00:11:44.832 Normal NVM Subsystem Shutdown event: Not Supported 00:11:44.832 Zone Descriptor Change Notices: Not Supported 00:11:44.832 Discovery Log Change Notices: Not Supported 00:11:44.832 Controller Attributes 00:11:44.832 128-bit Host Identifier: Not Supported 00:11:44.832 Non-Operational Permissive Mode: Not Supported 00:11:44.832 NVM Sets: Not Supported 00:11:44.832 Read Recovery Levels: Not Supported 00:11:44.832 Endurance Groups: Not Supported 00:11:44.832 Predictable Latency Mode: Not Supported 00:11:44.832 Traffic Based Keep Alive: Not Supported 00:11:44.832 Namespace Granularity: Not Supported 00:11:44.832 SQ Associations: Not Supported 00:11:44.832 UUID List: Not Supported 00:11:44.832 Multi-Domain Subsystem: Not Supported 00:11:44.832 Fixed Capacity Management: Not Supported 00:11:44.832 Variable Capacity Management: Not Supported 00:11:44.832 Delete Endurance Group: Not Supported 00:11:44.832 Delete NVM Set: Not Supported 00:11:44.832 Extended LBA Formats Supported: Supported 00:11:44.832 Flexible Data Placement Supported: Not Supported 00:11:44.832 00:11:44.832 Controller Memory Buffer Support 00:11:44.832 ================================ 00:11:44.832 Supported: No 00:11:44.832 00:11:44.832 Persistent Memory Region Support 00:11:44.832 ================================ 00:11:44.832 Supported: No 00:11:44.832 00:11:44.832 Admin Command Set Attributes 00:11:44.832 ============================ 00:11:44.832 Security Send/Receive: Not Supported 00:11:44.832 Format NVM: Supported 00:11:44.832 Firmware Activate/Download: Not Supported 00:11:44.832 Namespace Management: Supported 00:11:44.832 Device Self-Test: Not Supported 00:11:44.832 Directives: Supported 00:11:44.832 NVMe-MI: Not Supported 00:11:44.832 Virtualization Management: Not Supported 00:11:44.832 Doorbell Buffer Config: Supported 00:11:44.832 Get LBA Status Capability: Not Supported 00:11:44.832 Command & Feature Lockdown Capability: Not Supported 00:11:44.832 Abort Command Limit: 4 00:11:44.832 Async Event Request Limit: 4 00:11:44.832 Number of Firmware Slots: N/A 00:11:44.832 Firmware Slot 1 Read-Only: N/A 00:11:44.833 Firmware
Activation Without Reset: N/A 00:11:44.833 Multiple Update Detection Support: N/A 00:11:44.833 Firmware Update Granularity: No Information Provided 00:11:44.833 Per-Namespace SMART Log: Yes 00:11:44.833 Asymmetric Namespace Access Log Page: Not Supported 00:11:44.833 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:44.833 Command Effects Log Page: Supported 00:11:44.833 Get Log Page Extended Data: Supported 00:11:44.833 Telemetry Log Pages: Not Supported 00:11:44.833 Persistent Event Log Pages: Not Supported 00:11:44.833 Supported Log Pages Log Page: May Support 00:11:44.833 Commands Supported & Effects Log Page: Not Supported 00:11:44.833 Feature Identifiers & Effects Log Page: May Support 00:11:44.833 NVMe-MI Commands & Effects Log Page: May Support 00:11:44.833 Data Area 4 for Telemetry Log: Not Supported 00:11:44.833 Error Log Page Entries Supported: 1 00:11:44.833 Keep Alive: Not Supported 00:11:44.833 00:11:44.833 NVM Command Set Attributes 00:11:44.833 ========================== 00:11:44.833 Submission Queue Entry Size 00:11:44.833 Max: 64 00:11:44.833 Min: 64 00:11:44.833 Completion Queue Entry Size 00:11:44.833 Max: 16 00:11:44.833 Min: 16 00:11:44.833 Number of Namespaces: 256 00:11:44.833 Compare Command: Supported 00:11:44.833 Write Uncorrectable Command: Not Supported 00:11:44.833 Dataset Management Command: Supported 00:11:44.833 Write Zeroes Command: Supported 00:11:44.833 Set Features Save Field: Supported 00:11:44.833 Reservations: Not Supported 00:11:44.833 Timestamp: Supported 00:11:44.833 Copy: Supported 00:11:44.833 Volatile Write Cache: Present 00:11:44.833 Atomic Write Unit (Normal): 1 00:11:44.833 Atomic Write Unit (PFail): 1 00:11:44.833 Atomic Compare & Write Unit: 1 00:11:44.833 Fused Compare & Write: Not Supported 00:11:44.833 Scatter-Gather List 00:11:44.833 SGL Command Set: Supported 00:11:44.833 SGL Keyed: Not Supported 00:11:44.833 SGL Bit Bucket Descriptor: Not Supported 00:11:44.833 SGL Metadata Pointer: Not Supported 00:11:44.833 Oversized SGL: Not Supported 00:11:44.833 SGL Metadata Address: Not Supported 00:11:44.833 SGL Offset: Not Supported 00:11:44.833 Transport SGL Data Block: Not Supported 00:11:44.833 Replay Protected Memory Block: Not Supported 00:11:44.833 00:11:44.833 Firmware Slot Information 00:11:44.833 ========================= 00:11:44.833 Active slot: 1 00:11:44.833 Slot 1 Firmware Revision: 1.0 00:11:44.833 00:11:44.833 00:11:44.833 Commands Supported and Effects 00:11:44.833 ============================== 00:11:44.833 Admin Commands 00:11:44.833 -------------- 00:11:44.833 Delete I/O Submission Queue (00h): Supported 00:11:44.833 Create I/O Submission Queue (01h): Supported 00:11:44.833 Get Log Page (02h): Supported 00:11:44.833 Delete I/O Completion Queue (04h): Supported 00:11:44.833 Create I/O Completion Queue (05h): Supported 00:11:44.833 Identify (06h): Supported 00:11:44.833 Abort (08h): Supported 00:11:44.833 Set Features (09h): Supported 00:11:44.833 Get Features (0Ah): Supported 00:11:44.833 Asynchronous Event Request (0Ch): Supported 00:11:44.833 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:44.833 Directive Send (19h): Supported 00:11:44.833 Directive Receive (1Ah): Supported 00:11:44.833 Virtualization Management (1Ch): Supported 00:11:44.833 Doorbell Buffer Config (7Ch): Supported 00:11:44.833 Format NVM (80h): Supported LBA-Change 00:11:44.833 I/O Commands 00:11:44.833 ------------ 00:11:44.833 Flush (00h): Supported LBA-Change 00:11:44.833 Write (01h): Supported LBA-Change 00:11:44.833 Read (02h):
Supported 00:11:44.833 Compare (05h): Supported 00:11:44.833 Write Zeroes (08h): Supported LBA-Change 00:11:44.833 Dataset Management (09h): Supported LBA-Change 00:11:44.833 Unknown (0Ch): Supported 00:11:44.833 Unknown (12h): Supported 00:11:44.833 Copy (19h): Supported LBA-Change 00:11:44.833 Unknown (1Dh): Supported LBA-Change 00:11:44.833 00:11:44.833 Error Log 00:11:44.833 ========= 00:11:44.833 00:11:44.833 Arbitration 00:11:44.833 =========== 00:11:44.833 Arbitration Burst: no limit 00:11:44.833 00:11:44.833 Power Management 00:11:44.833 ================ 00:11:44.833 Number of Power States: 1 00:11:44.833 Current Power State: Power State #0 00:11:44.833 Power State #0: 00:11:44.833 Max Power: 25.00 W 00:11:44.833 Non-Operational State: Operational 00:11:44.833 Entry Latency: 16 microseconds 00:11:44.833 Exit Latency: 4 microseconds 00:11:44.833 Relative Read Throughput: 0 00:11:44.833 Relative Read Latency: 0 00:11:44.833 Relative Write Throughput: 0 00:11:44.833 Relative Write Latency: 0 00:11:44.833 Idle Power: Not Reported 00:11:44.833 Active Power: Not Reported 00:11:44.833 Non-Operational Permissive Mode: Not Supported 00:11:44.833 00:11:44.833 Health Information 00:11:44.833 ================== 00:11:44.833 Critical Warnings: 00:11:44.833 Available Spare Space: OK 00:11:44.833 Temperature: OK 00:11:44.833 Device Reliability: OK 00:11:44.833 Read Only: No 00:11:44.833 Volatile Memory Backup: OK 00:11:44.833 Current Temperature: 323 Kelvin (50 Celsius) 00:11:44.833 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:44.833 Available Spare: 0% 00:11:44.833 Available Spare Threshold: 0% 00:11:44.833 Life Percentage Used: 0% 00:11:44.833 Data Units Read: 2029 00:11:44.833 Data Units Written: 939 00:11:44.833 Host Read Commands: 77799 00:11:44.833 Host Write Commands: 38601 00:11:44.833 Controller Busy Time: 0 minutes 00:11:44.833 Power Cycles: 0 00:11:44.833 Power On Hours: 0 hours 00:11:44.833 Unsafe Shutdowns: 0 00:11:44.833 Unrecoverable Media Errors: 0 00:11:44.833 Lifetime Error Log Entries: 0 00:11:44.833 Warning Temperature Time: 0 minutes 00:11:44.833 Critical Temperature Time: 0 minutes 00:11:44.833 00:11:44.833 Number of Queues 00:11:44.833 ================ 00:11:44.833 Number of I/O Submission Queues: 64 00:11:44.833 Number of I/O Completion Queues: 64 00:11:44.833 00:11:44.833 ZNS Specific Controller Data 00:11:44.833 ============================ 00:11:44.833 Zone Append Size Limit: 0 00:11:44.833 00:11:44.833 00:11:44.833 Active Namespaces 00:11:44.833 ================= 00:11:44.833 Namespace ID:1 00:11:44.833 Error Recovery Timeout: Unlimited 00:11:44.833 Command Set Identifier: NVM (00h) 00:11:44.833 Deallocate: Supported 00:11:44.833 Deallocated/Unwritten Error: Supported 00:11:44.833 Deallocated Read Value: All 0x00 00:11:44.833 Deallocate in Write Zeroes: Not Supported 00:11:44.833 Deallocated Guard Field: 0xFFFF 00:11:44.833 Flush: Supported 00:11:44.833 Reservation: Not Supported 00:11:44.833 Metadata Transferred as: Separate Metadata Buffer 00:11:44.833 Namespace Sharing Capabilities: Private 00:11:44.833 Size (in LBAs): 1548666 (5GiB) 00:11:44.833 Capacity (in LBAs): 1548666 (5GiB) 00:11:44.833 Utilization (in LBAs): 1548666 (5GiB) 00:11:44.833 Thin Provisioning: Not Supported 00:11:44.833 Per-NS Atomic Units: No 00:11:44.833 Maximum Single Source Range Length: 128 00:11:44.833 Maximum Copy Length: 128 00:11:44.833 Maximum Source Range Count: 128 00:11:44.833 NGUID/EUI64 Never Reused: No 00:11:44.833 Namespace Write Protected: No 00:11:44.833 Number of LBA 
Formats: 8 00:11:44.833 Current LBA Format: LBA Format #07 00:11:44.833 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:44.833 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:44.833 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:44.833 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:44.833 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:44.833 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:44.833 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:44.833 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:44.833 00:11:44.833 ===================================================== 00:11:44.833 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:44.833 ===================================================== 00:11:44.833 Controller Capabilities/Features 00:11:44.833 ================================ 00:11:44.833 Vendor ID: 1b36 00:11:44.833 Subsystem Vendor ID: 1af4 00:11:44.833 Serial Number: 12341 00:11:44.833 Model Number: QEMU NVMe Ctrl 00:11:44.833 Firmware Version: 8.0.0 00:11:44.833 Recommended Arb Burst: 6 00:11:44.834 IEEE OUI Identifier: 00 54 52 00:11:44.834 Multi-path I/O 00:11:44.834 May have multiple subsystem ports: No 00:11:44.834 May have multiple controllers: No 00:11:44.834 Associated with SR-IOV VF: No 00:11:44.834 Max Data Transfer Size: 524288 00:11:44.834 Max Number of Namespaces: 256 00:11:44.834 Max Number of I/O Queues: 64 00:11:44.834 NVMe Specification Version (VS): 1.4 00:11:44.834 NVMe Specification Version (Identify): 1.4 00:11:44.834 Maximum Queue Entries: 2048 00:11:44.834 Contiguous Queues Required: Yes 00:11:44.834 Arbitration Mechanisms Supported 00:11:44.834 Weighted Round Robin: Not Supported 00:11:44.834 Vendor Specific: Not Supported 00:11:44.834 Reset Timeout: 7500 ms 00:11:44.834 Doorbell Stride: 4 bytes 00:11:44.834 NVM Subsystem Reset: Not Supported 00:11:44.834 Command Sets Supported 00:11:44.834 NVM Command Set: Supported 00:11:44.834 Boot Partition: Not Supported 00:11:44.834 Memory Page Size Minimum: 4096 bytes 00:11:44.834 Memory Page Size Maximum: 65536 bytes 00:11:44.834 Persistent Memory Region: Not Supported 00:11:44.834 Optional Asynchronous Events Supported 00:11:44.834 Namespace Attribute Notices: Supported 00:11:44.834 Firmware Activation Notices: Not Supported 00:11:44.834 ANA Change Notices: Not Supported 00:11:44.834 PLE Aggregate Log Change Notices: Not Supported 00:11:44.834 LBA Status Info Alert Notices: Not Supported 00:11:44.834 EGE Aggregate Log Change Notices: Not Supported 00:11:44.834 Normal NVM Subsystem Shutdown event: Not Supported 00:11:44.834 Zone Descriptor Change Notices: Not Supported 00:11:44.834 Discovery Log Change Notices: Not Supported 00:11:44.834 Controller Attributes 00:11:44.834 128-bit Host Identifier: Not Supported 00:11:44.834 Non-Operational Permissive Mode: Not Supported 00:11:44.834 NVM Sets: Not Supported 00:11:44.834 Read Recovery Levels: Not Supported 00:11:44.834 Endurance Groups: Not Supported 00:11:44.834 Predictable Latency Mode: Not Supported 00:11:44.834 Traffic Based Keep Alive: Not Supported 00:11:44.834 Namespace Granularity: Not Supported 00:11:44.834 SQ Associations: Not Supported 00:11:44.834 UUID List: Not Supported 00:11:44.834 Multi-Domain Subsystem: Not Supported 00:11:44.834 Fixed Capacity Management: Not Supported 00:11:44.834 Variable Capacity Management: Not Supported 00:11:44.834 Delete Endurance Group: Not Supported 00:11:44.834 Delete NVM Set: Not Supported 00:11:44.834 Extended LBA Formats Supported: Supported 00:11:44.834 Flexible
Data Placement Supported: Not Supported 00:11:44.834 00:11:44.834 Controller Memory Buffer Support 00:11:44.834 ================================ 00:11:44.834 Supported: No 00:11:44.834 00:11:44.834 Persistent Memory Region Support 00:11:44.834 ================================ 00:11:44.834 Supported: No 00:11:44.834 00:11:44.834 Admin Command Set Attributes 00:11:44.834 ============================ 00:11:44.834 Security Send/Receive: Not Supported 00:11:44.834 Format NVM: Supported 00:11:44.834 Firmware Activate/Download: Not Supported 00:11:44.834 Namespace Management: Supported 00:11:44.834 Device Self-Test: Not Supported 00:11:44.834 Directives: Supported 00:11:44.834 NVMe-MI: Not Supported 00:11:44.834 Virtualization Management: Not Supported 00:11:44.834 Doorbell Buffer Config: Supported 00:11:44.834 Get LBA Status Capability: Not Supported 00:11:44.834 Command & Feature Lockdown Capability: Not Supported 00:11:44.834 Abort Command Limit: 4 00:11:44.834 Async Event Request Limit: 4 00:11:44.834 Number of Firmware Slots: N/A 00:11:44.834 Firmware Slot 1 Read-Only: N/A 00:11:44.834 Firmware Activation Without Reset: N/A 00:11:44.834 Multiple Update Detection Support: N/A 00:11:44.834 Firmware Update Granularity: No Information Provided 00:11:44.834 Per-Namespace SMART Log: Yes 00:11:44.834 Asymmetric Namespace Access Log Page: Not Supported 00:11:44.834 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:44.834 Command Effects Log Page: Supported 00:11:44.834 Get Log Page Extended Data: Supported 00:11:44.834 Telemetry Log Pages: Not Supported 00:11:44.834 Persistent Event Log Pages: Not Supported 00:11:44.834 Supported Log Pages Log Page: May Support 00:11:44.834 Commands Supported & Effects Log Page: Not Supported 00:11:44.834 Feature Identifiers & Effects Log Page: May Support 00:11:44.834 NVMe-MI Commands & Effects Log Page: May Support 00:11:44.834 Data Area 4 for Telemetry Log: Not Supported 00:11:44.834 Error Log Page Entries Supported: 1 00:11:44.834 Keep Alive: Not Supported 00:11:44.834 00:11:44.834 NVM Command Set Attributes 00:11:44.834 ========================== 00:11:44.834 Submission Queue Entry Size 00:11:44.834 Max: 64 00:11:44.834 Min: 64 00:11:44.834 Completion Queue Entry Size 00:11:44.834 Max: 16 00:11:44.834 Min: 16 00:11:44.834 Number of Namespaces: 256 00:11:44.834 Compare Command: Supported 00:11:44.834 Write Uncorrectable Command: Not Supported 00:11:44.834 Dataset Management Command: Supported 00:11:44.834 Write Zeroes Command: Supported 00:11:44.834 Set Features Save Field: Supported 00:11:44.834 Reservations: Not Supported 00:11:44.834 Timestamp: Supported 00:11:44.834 Copy: Supported 00:11:44.834 Volatile Write Cache: Present 00:11:44.834 Atomic Write Unit (Normal): 1 00:11:44.834 Atomic Write Unit (PFail): 1 00:11:44.834 Atomic Compare & Write Unit: 1 00:11:44.834 Fused Compare & Write: Not Supported 00:11:44.834 Scatter-Gather List 00:11:44.834 SGL Command Set: Supported 00:11:44.834 SGL Keyed: Not Supported 00:11:44.834 SGL Bit Bucket Descriptor: Not Supported 00:11:44.834 SGL Metadata Pointer: Not Supported 00:11:44.834 Oversized SGL: Not Supported 00:11:44.834 SGL Metadata Address: Not Supported 00:11:44.834 SGL Offset: Not Supported 00:11:44.834 Transport SGL Data Block: Not Supported 00:11:44.834 Replay Protected Memory Block: Not Supported 00:11:44.834 00:11:44.834 Firmware Slot Information 00:11:44.834 ========================= 00:11:44.834 Active slot: 1 00:11:44.834 Slot 1 Firmware Revision: 1.0 00:11:44.834 00:11:44.834 00:11:44.834 Commands Supported
and Effects 00:11:44.834 ============================== 00:11:44.834 Admin Commands 00:11:44.834 -------------- 00:11:44.834 Delete I/O Submission Queue (00h): Supported 00:11:44.834 Create I/O Submission Queue (01h): Supported 00:11:44.834 Get Log Page (02h): Supported 00:11:44.834 Delete I/O Completion Queue (04h): Supported 00:11:44.834 Create I/O Completion Queue (05h): Supported 00:11:44.834 Identify (06h): Supported 00:11:44.834 Abort (08h): Supported 00:11:44.834 Set Features (09h): Supported 00:11:44.834 Get Features (0Ah): Supported 00:11:44.834 Asynchronous Event Request (0Ch): Supported 00:11:44.834 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:44.834 Directive Send (19h): Supported 00:11:44.834 Directive Receive (1Ah): Supported 00:11:44.834 Virtualization Management (1Ch): Supported 00:11:44.834 Doorbell Buffer Config (7Ch): Supported 00:11:44.834 Format NVM (80h): Supported LBA-Change 00:11:44.834 I/O Commands 00:11:44.834 ------------ 00:11:44.834 Flush (00h): Supported LBA-Change 00:11:44.834 Write (01h): Supported LBA-Change 00:11:44.834 Read (02h): Supported 00:11:44.834 Compare (05h): Supported 00:11:44.834 Write Zeroes (08h): Supported LBA-Change 00:11:44.834 Dataset Management (09h): Supported LBA-Change 00:11:44.834 Unknown (0Ch): Supported 00:11:44.834 Unknown (12h): Supported 00:11:44.834 Copy (19h): Supported LBA-Change 00:11:44.834 Unknown (1Dh): Supported LBA-Change 00:11:44.834 00:11:44.834 Error Log 00:11:44.834 ========= 00:11:44.834 00:11:44.834 Arbitration 00:11:44.834 =========== 00:11:44.834 Arbitration Burst: no limit 00:11:44.834 00:11:44.834 Power Management 00:11:44.834 ================ 00:11:44.834 Number of Power States: 1 00:11:44.835 Current Power State: Power State #0 00:11:44.835 Power State #0: 00:11:44.835 Max Power: 25.00 W 00:11:44.835 Non-Operational State: Operational 00:11:44.835 Entry Latency: 16 microseconds 00:11:44.835 Exit Latency: 4 microseconds 00:11:44.835 Relative Read Throughput: 0 00:11:44.835 Relative Read Latency: 0 00:11:44.835 Relative Write Throughput: 0 00:11:44.835 Relative Write Latency: 0 00:11:44.835 Idle Power: Not Reported 00:11:44.835 Active Power: Not Reported 00:11:44.835 Non-Operational Permissive Mode: Not Supported 00:11:44.835 00:11:44.835 Health Information 00:11:44.835 ================== 00:11:44.835 Critical Warnings: 00:11:44.835 Available Spare Space: OK 00:11:44.835 Temperature: OK 00:11:44.835 Device Reliability: OK 00:11:44.835 Read Only: No 00:11:44.835 Volatile Memory Backup: OK 00:11:44.835 Current Temperature: 323 Kelvin (50 Celsius) 00:11:44.835 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:44.835 Available Spare: 0% 00:11:44.835 Available Spare Threshold: 0% 00:11:44.835 Life Percentage Used: 0% 00:11:44.835 Data Units Read: 1379 00:11:44.835 Data Units Written: 644 00:11:44.835 Host Read Commands: 55731 00:11:44.835 Host Write Commands: 27420 00:11:44.835 Controller Busy Time: 0 minutes 00:11:44.835 Power Cycles: 0 00:11:44.835 Power On Hours: 0 hours 00:11:44.835 Unsafe Shutdowns: 0 00:11:44.835 Unrecoverable Media Errors: 0 00:11:44.835 Lifetime Error Log Entries: 0 00:11:44.835 Warning Temperature Time: 0 minutes 00:11:44.835 Critical Temperature Time: 0 minutes 00:11:44.835 00:11:44.835 Number of Queues 00:11:44.835 ================ 00:11:44.835 Number of I/O Submission Queues: 64 00:11:44.835 Number of I/O Completion Queues: 64 00:11:44.835 00:11:44.835 ZNS Specific Controller Data 00:11:44.835 ============================ 00:11:44.835 Zone Append Size Limit: 0 
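A note on reading the namespace listings in the rest of this dump: where a size is reported as both an LBA count and a GiB value, the two can be cross-checked by multiplying the LBA count by the data size of the namespace's current LBA format. For the two 4096-byte-LBA namespaces reported below, the products come out exact:

    1310720 LBAs x 4096 bytes = 5368709120 bytes = 5 GiB   (controller 12341, nsid 1)
     262144 LBAs x 4096 bytes = 1073741824 bytes = 1 GiB   (controller 12343, nsid 1)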
00:11:44.835 00:11:44.835 00:11:44.835 Active Namespaces 00:11:44.835 ================= 00:11:44.835 Namespace ID:1 00:11:44.835 Error Recovery Timeout: Unlimited 00:11:44.835 Command Set Identifier: NVM (00h) 00:11:44.835 Deallocate: Supported 00:11:44.835 Deallocated/Unwritten Error: Supported 00:11:44.835 Deallocated Read Value: All 0x00 00:11:44.835 Deallocate in Write Zeroes: Not Supported 00:11:44.835 Deallocated Guard Field: 0xFFFF 00:11:44.835 Flush: Supported 00:11:44.835 Reservation: Not Supported 00:11:44.835 Namespace Sharing Capabilities: Private 00:11:44.835 Size (in LBAs): 1310720 (5GiB) 00:11:44.835 Capacity (in LBAs): 1310720 (5GiB) 00:11:44.835 Utilization (in LBAs): 1310720 (5GiB) 00:11:44.835 Thin Provisioning: Not Supported 00:11:44.835 Per-NS Atomic Units: No 00:11:44.835 Maximum Single Source Range Length: 128 00:11:44.835 Maximum Copy Length: 128 00:11:44.835 Maximum Source Range Count: 128 00:11:44.835 NGUID/EUI64 Never Reused: No 00:11:44.835 Namespace Write Protected: No 00:11:44.835 Number of LBA Formats: 8 00:11:44.835 Current LBA Format: LBA Format #04 00:11:44.835 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:44.835 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:44.835 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:44.835 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:44.835 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:44.835 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:44.835 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:44.835 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:44.835 00:11:44.835 ===================================================== 00:11:44.835 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:44.835 ===================================================== 00:11:44.835 Controller Capabilities/Features 00:11:44.835 ================================ 00:11:44.835 Vendor ID: 1b36 00:11:44.835 Subsystem Vendor ID: 1af4 00:11:44.835 Serial Number: 12343 00:11:44.835 Model Number: QEMU NVMe Ctrl 00:11:44.835 Firmware Version: 8.0.0 00:11:44.835 Recommended Arb Burst: 6 00:11:44.835 IEEE OUI Identifier: 00 54 52 00:11:44.835 Multi-path I/O 00:11:44.835 May have multiple subsystem ports: No 00:11:44.835 May have multiple controllers: Yes 00:11:44.835 Associated with SR-IOV VF: No 00:11:44.835 Max Data Transfer Size: 524288 00:11:44.835 Max Number of Namespaces: 256 00:11:44.835 Max Number of I/O Queues: 64 00:11:44.835 NVMe Specification Version (VS): 1.4 00:11:44.835 NVMe Specification Version (Identify): 1.4 00:11:44.835 Maximum Queue Entries: 2048 00:11:44.835 Contiguous Queues Required: Yes 00:11:44.835 Arbitration Mechanisms Supported 00:11:44.835 Weighted Round Robin: Not Supported 00:11:44.835 Vendor Specific: Not Supported 00:11:44.835 Reset Timeout: 7500 ms 00:11:44.835 Doorbell Stride: 4 bytes 00:11:44.835 NVM Subsystem Reset: Not Supported 00:11:44.835 Command Sets Supported 00:11:44.835 NVM Command Set: Supported 00:11:44.835 Boot Partition: Not Supported 00:11:44.835 Memory Page Size Minimum: 4096 bytes 00:11:44.835 Memory Page Size Maximum: 65536 bytes 00:11:44.835 Persistent Memory Region: Not Supported 00:11:44.835 Optional Asynchronous Events Supported 00:11:44.835 Namespace Attribute Notices: Supported 00:11:44.835 Firmware Activation Notices: Not Supported 00:11:44.835 ANA Change Notices: Not Supported 00:11:44.835 PLE Aggregate Log Change Notices: Not Supported 00:11:44.835 LBA Status Info Alert Notices: Not Supported 00:11:44.835 EGE Aggregate Log Change Notices: Not 
Supported 00:11:44.835 Normal NVM Subsystem Shutdown event: Not Supported 00:11:44.835 Zone Descriptor Change Notices: Not Supported 00:11:44.835 Discovery Log Change Notices: Not Supported 00:11:44.835 Controller Attributes 00:11:44.835 128-bit Host Identifier: Not Supported 00:11:44.835 Non-Operational Permissive Mode: Not Supported 00:11:44.835 NVM Sets: Not Supported 00:11:44.835 Read Recovery Levels: Not Supported 00:11:44.835 Endurance Groups: Supported 00:11:44.835 Predictable Latency Mode: Not Supported 00:11:44.835 Traffic Based Keep Alive: Not Supported 00:11:44.835 Namespace Granularity: Not Supported 00:11:44.835 SQ Associations: Not Supported 00:11:44.835 UUID List: Not Supported 00:11:44.835 Multi-Domain Subsystem: Not Supported 00:11:44.835 [2024-07-26 23:19:36.426296] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 64733 terminated unexpected [2024-07-26 23:19:36.427363] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 64733 terminated unexpected [2024-07-26 23:19:36.428441] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 64733 terminated unexpected 00:11:44.835 Fixed Capacity Management: Not Supported 00:11:44.835 Variable Capacity Management: Not Supported 00:11:44.835 Delete Endurance Group: Not Supported 00:11:44.835 Delete NVM Set: Not Supported 00:11:44.835 Extended LBA Formats Supported: Supported 00:11:44.835 Flexible Data Placement Supported: Supported 00:11:44.835 00:11:44.835 Controller Memory Buffer Support 00:11:44.835 ================================ 00:11:44.835 Supported: No 00:11:44.835 00:11:44.835 Persistent Memory Region Support 00:11:44.835 ================================ 00:11:44.835 Supported: No 00:11:44.835 00:11:44.835 Admin Command Set Attributes 00:11:44.835 ============================ 00:11:44.835 Security Send/Receive: Not Supported 00:11:44.835 Format NVM: Supported 00:11:44.835 Firmware Activate/Download: Not Supported 00:11:44.835 Namespace Management: Supported 00:11:44.835 Device Self-Test: Not Supported 00:11:44.835 Directives: Supported 00:11:44.836 NVMe-MI: Not Supported 00:11:44.836 Virtualization Management: Not Supported 00:11:44.836 Doorbell Buffer Config: Supported 00:11:44.836 Get LBA Status Capability: Not Supported 00:11:44.836 Command & Feature Lockdown Capability: Not Supported 00:11:44.836 Abort Command Limit: 4 00:11:44.836 Async Event Request Limit: 4 00:11:44.836 Number of Firmware Slots: N/A 00:11:44.836 Firmware Slot 1 Read-Only: N/A 00:11:44.836 Firmware Activation Without Reset: N/A 00:11:44.836 Multiple Update Detection Support: N/A 00:11:44.836 Firmware Update Granularity: No Information Provided 00:11:44.836 Per-Namespace SMART Log: Yes 00:11:44.836 Asymmetric Namespace Access Log Page: Not Supported 00:11:44.836 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:44.836 Command Effects Log Page: Supported 00:11:44.836 Get Log Page Extended Data: Supported 00:11:44.836 Telemetry Log Pages: Not Supported 00:11:44.836 Persistent Event Log Pages: Not Supported 00:11:44.836 Supported Log Pages Log Page: May Support 00:11:44.836 Commands Supported & Effects Log Page: Not Supported 00:11:44.836 Feature Identifiers & Effects Log Page: May Support 00:11:44.836 NVMe-MI Commands & Effects Log Page: May Support 00:11:44.836 Data Area 4 for Telemetry Log: Not Supported 00:11:44.836 Error Log Page Entries Supported: 1 00:11:44.836 Keep Alive: Not Supported 00:11:44.836 00:11:44.836 NVM Command Set
Attributes 00:11:44.836 ========================== 00:11:44.836 Submission Queue Entry Size 00:11:44.836 Max: 64 00:11:44.836 Min: 64 00:11:44.836 Completion Queue Entry Size 00:11:44.836 Max: 16 00:11:44.836 Min: 16 00:11:44.836 Number of Namespaces: 256 00:11:44.836 Compare Command: Supported 00:11:44.836 Write Uncorrectable Command: Not Supported 00:11:44.836 Dataset Management Command: Supported 00:11:44.836 Write Zeroes Command: Supported 00:11:44.836 Set Features Save Field: Supported 00:11:44.836 Reservations: Not Supported 00:11:44.836 Timestamp: Supported 00:11:44.836 Copy: Supported 00:11:44.836 Volatile Write Cache: Present 00:11:44.836 Atomic Write Unit (Normal): 1 00:11:44.836 Atomic Write Unit (PFail): 1 00:11:44.836 Atomic Compare & Write Unit: 1 00:11:44.836 Fused Compare & Write: Not Supported 00:11:44.836 Scatter-Gather List 00:11:44.836 SGL Command Set: Supported 00:11:44.836 SGL Keyed: Not Supported 00:11:44.836 SGL Bit Bucket Descriptor: Not Supported 00:11:44.836 SGL Metadata Pointer: Not Supported 00:11:44.836 Oversized SGL: Not Supported 00:11:44.836 SGL Metadata Address: Not Supported 00:11:44.836 SGL Offset: Not Supported 00:11:44.836 Transport SGL Data Block: Not Supported 00:11:44.836 Replay Protected Memory Block: Not Supported 00:11:44.836 00:11:44.836 Firmware Slot Information 00:11:44.836 ========================= 00:11:44.836 Active slot: 1 00:11:44.836 Slot 1 Firmware Revision: 1.0 00:11:44.836 00:11:44.836 00:11:44.836 Commands Supported and Effects 00:11:44.836 ============================== 00:11:44.836 Admin Commands 00:11:44.836 -------------- 00:11:44.836 Delete I/O Submission Queue (00h): Supported 00:11:44.836 Create I/O Submission Queue (01h): Supported 00:11:44.836 Get Log Page (02h): Supported 00:11:44.836 Delete I/O Completion Queue (04h): Supported 00:11:44.836 Create I/O Completion Queue (05h): Supported 00:11:44.836 Identify (06h): Supported 00:11:44.836 Abort (08h): Supported 00:11:44.836 Set Features (09h): Supported 00:11:44.836 Get Features (0Ah): Supported 00:11:44.836 Asynchronous Event Request (0Ch): Supported 00:11:44.836 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:44.836 Directive Send (19h): Supported 00:11:44.836 Directive Receive (1Ah): Supported 00:11:44.836 Virtualization Management (1Ch): Supported 00:11:44.836 Doorbell Buffer Config (7Ch): Supported 00:11:44.836 Format NVM (80h): Supported LBA-Change 00:11:44.836 I/O Commands 00:11:44.836 ------------ 00:11:44.836 Flush (00h): Supported LBA-Change 00:11:44.836 Write (01h): Supported LBA-Change 00:11:44.836 Read (02h): Supported 00:11:44.836 Compare (05h): Supported 00:11:44.836 Write Zeroes (08h): Supported LBA-Change 00:11:44.836 Dataset Management (09h): Supported LBA-Change 00:11:44.836 Unknown (0Ch): Supported 00:11:44.836 Unknown (12h): Supported 00:11:44.836 Copy (19h): Supported LBA-Change 00:11:44.836 Unknown (1Dh): Supported LBA-Change 00:11:44.836 00:11:44.836 Error Log 00:11:44.836 ========= 00:11:44.836 00:11:44.836 Arbitration 00:11:44.836 =========== 00:11:44.836 Arbitration Burst: no limit 00:11:44.836 00:11:44.836 Power Management 00:11:44.836 ================ 00:11:44.836 Number of Power States: 1 00:11:44.836 Current Power State: Power State #0 00:11:44.836 Power State #0: 00:11:44.836 Max Power: 25.00 W 00:11:44.836 Non-Operational State: Operational 00:11:44.836 Entry Latency: 16 microseconds 00:11:44.836 Exit Latency: 4 microseconds 00:11:44.836 Relative Read Throughput: 0 00:11:44.836 Relative Read Latency: 0 00:11:44.836 Relative 
Write Throughput: 0 00:11:44.836 Relative Write Latency: 0 00:11:44.836 Idle Power: Not Reported 00:11:44.836 Active Power: Not Reported 00:11:44.836 Non-Operational Permissive Mode: Not Supported 00:11:44.836 00:11:44.836 Health Information 00:11:44.836 ================== 00:11:44.836 Critical Warnings: 00:11:44.836 Available Spare Space: OK 00:11:44.836 Temperature: OK 00:11:44.836 Device Reliability: OK 00:11:44.836 Read Only: No 00:11:44.836 Volatile Memory Backup: OK 00:11:44.836 Current Temperature: 323 Kelvin (50 Celsius) 00:11:44.836 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:44.836 Available Spare: 0% 00:11:44.836 Available Spare Threshold: 0% 00:11:44.836 Life Percentage Used: 0% 00:11:44.836 Data Units Read: 1584 00:11:44.836 Data Units Written: 742 00:11:44.836 Host Read Commands: 57455 00:11:44.836 Host Write Commands: 28250 00:11:44.836 Controller Busy Time: 0 minutes 00:11:44.836 Power Cycles: 0 00:11:44.836 Power On Hours: 0 hours 00:11:44.836 Unsafe Shutdowns: 0 00:11:44.836 Unrecoverable Media Errors: 0 00:11:44.836 Lifetime Error Log Entries: 0 00:11:44.836 Warning Temperature Time: 0 minutes 00:11:44.836 Critical Temperature Time: 0 minutes 00:11:44.836 00:11:44.836 Number of Queues 00:11:44.836 ================ 00:11:44.836 Number of I/O Submission Queues: 64 00:11:44.836 Number of I/O Completion Queues: 64 00:11:44.836 00:11:44.836 ZNS Specific Controller Data 00:11:44.836 ============================ 00:11:44.836 Zone Append Size Limit: 0 00:11:44.836 00:11:44.836 00:11:44.836 Active Namespaces 00:11:44.836 ================= 00:11:44.836 Namespace ID:1 00:11:44.836 Error Recovery Timeout: Unlimited 00:11:44.836 Command Set Identifier: NVM (00h) 00:11:44.836 Deallocate: Supported 00:11:44.836 Deallocated/Unwritten Error: Supported 00:11:44.836 Deallocated Read Value: All 0x00 00:11:44.836 Deallocate in Write Zeroes: Not Supported 00:11:44.836 Deallocated Guard Field: 0xFFFF 00:11:44.836 Flush: Supported 00:11:44.836 Reservation: Not Supported 00:11:44.836 Namespace Sharing Capabilities: Multiple Controllers 00:11:44.836 Size (in LBAs): 262144 (1GiB) 00:11:44.836 Capacity (in LBAs): 262144 (1GiB) 00:11:44.836 Utilization (in LBAs): 262144 (1GiB) 00:11:44.836 Thin Provisioning: Not Supported 00:11:44.836 Per-NS Atomic Units: No 00:11:44.836 Maximum Single Source Range Length: 128 00:11:44.836 Maximum Copy Length: 128 00:11:44.836 Maximum Source Range Count: 128 00:11:44.836 NGUID/EUI64 Never Reused: No 00:11:44.836 Namespace Write Protected: No 00:11:44.836 Endurance group ID: 1 00:11:44.837 Number of LBA Formats: 8 00:11:44.837 Current LBA Format: LBA Format #04 00:11:44.837 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:44.837 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:44.837 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:44.837 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:44.837 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:44.837 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:44.837 [2024-07-26 23:19:36.430344] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 64733 terminated unexpected 00:11:44.837 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:44.837 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:44.837 00:11:44.837 Get Feature FDP: 00:11:44.837 ================ 00:11:44.837 Enabled: Yes 00:11:44.837 FDP configuration index: 0 00:11:44.837 00:11:44.837 FDP configurations log page 00:11:44.837 =========================== 00:11:44.837 Number of FDP
configurations: 1 00:11:44.837 Version: 0 00:11:44.837 Size: 112 00:11:44.837 FDP Configuration Descriptor: 0 00:11:44.837 Descriptor Size: 96 00:11:44.837 Reclaim Group Identifier format: 2 00:11:44.837 FDP Volatile Write Cache: Not Present 00:11:44.837 FDP Configuration: Valid 00:11:44.837 Vendor Specific Size: 0 00:11:44.837 Number of Reclaim Groups: 2 00:11:44.837 Number of Reclaim Unit Handles: 8 00:11:44.837 Max Placement Identifiers: 128 00:11:44.837 Number of Namespaces Supported: 256 00:11:44.837 Reclaim Unit Nominal Size: 6000000 bytes 00:11:44.837 Estimated Reclaim Unit Time Limit: Not Reported 00:11:44.837 RUH Desc #000: RUH Type: Initially Isolated 00:11:44.837 RUH Desc #001: RUH Type: Initially Isolated 00:11:44.837 RUH Desc #002: RUH Type: Initially Isolated 00:11:44.837 RUH Desc #003: RUH Type: Initially Isolated 00:11:44.837 RUH Desc #004: RUH Type: Initially Isolated 00:11:44.837 RUH Desc #005: RUH Type: Initially Isolated 00:11:44.837 RUH Desc #006: RUH Type: Initially Isolated 00:11:44.837 RUH Desc #007: RUH Type: Initially Isolated 00:11:44.837 00:11:44.837 FDP reclaim unit handle usage log page 00:11:44.837 ====================================== 00:11:44.837 Number of Reclaim Unit Handles: 8 00:11:44.837 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:44.837 RUH Usage Desc #001: RUH Attributes: Unused 00:11:44.837 RUH Usage Desc #002: RUH Attributes: Unused 00:11:44.837 RUH Usage Desc #003: RUH Attributes: Unused 00:11:44.837 RUH Usage Desc #004: RUH Attributes: Unused 00:11:44.837 RUH Usage Desc #005: RUH Attributes: Unused 00:11:44.837 RUH Usage Desc #006: RUH Attributes: Unused 00:11:44.837 RUH Usage Desc #007: RUH Attributes: Unused 00:11:44.837 00:11:44.837 FDP statistics log page 00:11:44.837 ======================= 00:11:44.837 Host bytes with metadata written: 493535232 00:11:44.837 Media bytes with metadata written: 493678592 00:11:44.837 Media bytes erased: 0 00:11:44.837 00:11:44.837 FDP events log page 00:11:44.837 =================== 00:11:44.837 Number of FDP events: 0 00:11:44.837 00:11:44.837 ===================================================== 00:11:44.837 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:44.837 ===================================================== 00:11:44.837 Controller Capabilities/Features 00:11:44.837 ================================ 00:11:44.837 Vendor ID: 1b36 00:11:44.837 Subsystem Vendor ID: 1af4 00:11:44.837 Serial Number: 12342 00:11:44.837 Model Number: QEMU NVMe Ctrl 00:11:44.837 Firmware Version: 8.0.0 00:11:44.837 Recommended Arb Burst: 6 00:11:44.837 IEEE OUI Identifier: 00 54 52 00:11:44.837 Multi-path I/O 00:11:44.837 May have multiple subsystem ports: No 00:11:44.837 May have multiple controllers: No 00:11:44.837 Associated with SR-IOV VF: No 00:11:44.837 Max Data Transfer Size: 524288 00:11:44.837 Max Number of Namespaces: 256 00:11:44.837 Max Number of I/O Queues: 64 00:11:44.837 NVMe Specification Version (VS): 1.4 00:11:44.837 NVMe Specification Version (Identify): 1.4 00:11:44.837 Maximum Queue Entries: 2048 00:11:44.837 Contiguous Queues Required: Yes 00:11:44.837 Arbitration Mechanisms Supported 00:11:44.837 Weighted Round Robin: Not Supported 00:11:44.837 Vendor Specific: Not Supported 00:11:44.837 Reset Timeout: 7500 ms 00:11:44.837 Doorbell Stride: 4 bytes 00:11:44.837 NVM Subsystem Reset: Not Supported 00:11:44.837 Command Sets Supported 00:11:44.837 NVM Command Set: Supported 00:11:44.837 Boot Partition: Not Supported 00:11:44.837 Memory Page Size Minimum: 4096 bytes 00:11:44.837 Memory
Page Size Maximum: 65536 bytes 00:11:44.837 Persistent Memory Region: Not Supported 00:11:44.837 Optional Asynchronous Events Supported 00:11:44.837 Namespace Attribute Notices: Supported 00:11:44.837 Firmware Activation Notices: Not Supported 00:11:44.837 ANA Change Notices: Not Supported 00:11:44.837 PLE Aggregate Log Change Notices: Not Supported 00:11:44.837 LBA Status Info Alert Notices: Not Supported 00:11:44.837 EGE Aggregate Log Change Notices: Not Supported 00:11:44.837 Normal NVM Subsystem Shutdown event: Not Supported 00:11:44.837 Zone Descriptor Change Notices: Not Supported 00:11:44.837 Discovery Log Change Notices: Not Supported 00:11:44.837 Controller Attributes 00:11:44.837 128-bit Host Identifier: Not Supported 00:11:44.837 Non-Operational Permissive Mode: Not Supported 00:11:44.837 NVM Sets: Not Supported 00:11:44.837 Read Recovery Levels: Not Supported 00:11:44.837 Endurance Groups: Not Supported 00:11:44.837 Predictable Latency Mode: Not Supported 00:11:44.837 Traffic Based Keep Alive: Not Supported 00:11:44.837 Namespace Granularity: Not Supported 00:11:44.837 SQ Associations: Not Supported 00:11:44.837 UUID List: Not Supported 00:11:44.837 Multi-Domain Subsystem: Not Supported 00:11:44.837 Fixed Capacity Management: Not Supported 00:11:44.837 Variable Capacity Management: Not Supported 00:11:44.837 Delete Endurance Group: Not Supported 00:11:44.837 Delete NVM Set: Not Supported 00:11:44.837 Extended LBA Formats Supported: Supported 00:11:44.837 Flexible Data Placement Supported: Not Supported 00:11:44.837 00:11:44.837 Controller Memory Buffer Support 00:11:44.837 ================================ 00:11:44.837 Supported: No 00:11:44.837 00:11:44.837 Persistent Memory Region Support 00:11:44.837 ================================ 00:11:44.837 Supported: No 00:11:44.837 00:11:44.837 Admin Command Set Attributes 00:11:44.837 ============================ 00:11:44.837 Security Send/Receive: Not Supported 00:11:44.837 Format NVM: Supported 00:11:44.837 Firmware Activate/Download: Not Supported 00:11:44.837 Namespace Management: Supported 00:11:44.837 Device Self-Test: Not Supported 00:11:44.837 Directives: Supported 00:11:44.837 NVMe-MI: Not Supported 00:11:44.837 Virtualization Management: Not Supported 00:11:44.837 Doorbell Buffer Config: Supported 00:11:44.837 Get LBA Status Capability: Not Supported 00:11:44.837 Command & Feature Lockdown Capability: Not Supported 00:11:44.837 Abort Command Limit: 4 00:11:44.837 Async Event Request Limit: 4 00:11:44.837 Number of Firmware Slots: N/A 00:11:44.837 Firmware Slot 1 Read-Only: N/A 00:11:44.838 Firmware Activation Without Reset: N/A 00:11:44.838 Multiple Update Detection Support: N/A 00:11:44.838 Firmware Update Granularity: No Information Provided 00:11:44.838 Per-Namespace SMART Log: Yes 00:11:44.838 Asymmetric Namespace Access Log Page: Not Supported 00:11:44.838 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:44.838 Command Effects Log Page: Supported 00:11:44.838 Get Log Page Extended Data: Supported 00:11:44.838 Telemetry Log Pages: Not Supported 00:11:44.838 Persistent Event Log Pages: Not Supported 00:11:44.838 Supported Log Pages Log Page: May Support 00:11:44.838 Commands Supported & Effects Log Page: Not Supported 00:11:44.838 Feature Identifiers & Effects Log Page: May Support 00:11:44.838 NVMe-MI Commands & Effects Log Page: May Support 00:11:44.838 Data Area 4 for Telemetry Log: Not Supported 00:11:44.838 Error Log Page Entries Supported: 1 00:11:44.838 Keep Alive: Not Supported 00:11:44.838 00:11:44.838 NVM Command
Set Attributes 00:11:44.838 ========================== 00:11:44.838 Submission Queue Entry Size 00:11:44.838 Max: 64 00:11:44.838 Min: 64 00:11:44.838 Completion Queue Entry Size 00:11:44.838 Max: 16 00:11:44.838 Min: 16 00:11:44.838 Number of Namespaces: 256 00:11:44.838 Compare Command: Supported 00:11:44.838 Write Uncorrectable Command: Not Supported 00:11:44.838 Dataset Management Command: Supported 00:11:44.838 Write Zeroes Command: Supported 00:11:44.838 Set Features Save Field: Supported 00:11:44.838 Reservations: Not Supported 00:11:44.838 Timestamp: Supported 00:11:44.838 Copy: Supported 00:11:44.838 Volatile Write Cache: Present 00:11:44.838 Atomic Write Unit (Normal): 1 00:11:44.838 Atomic Write Unit (PFail): 1 00:11:44.838 Atomic Compare & Write Unit: 1 00:11:44.838 Fused Compare & Write: Not Supported 00:11:44.838 Scatter-Gather List 00:11:44.838 SGL Command Set: Supported 00:11:44.838 SGL Keyed: Not Supported 00:11:44.838 SGL Bit Bucket Descriptor: Not Supported 00:11:44.838 SGL Metadata Pointer: Not Supported 00:11:44.838 Oversized SGL: Not Supported 00:11:44.838 SGL Metadata Address: Not Supported 00:11:44.838 SGL Offset: Not Supported 00:11:44.838 Transport SGL Data Block: Not Supported 00:11:44.838 Replay Protected Memory Block: Not Supported 00:11:44.838 00:11:44.838 Firmware Slot Information 00:11:44.838 ========================= 00:11:44.838 Active slot: 1 00:11:44.838 Slot 1 Firmware Revision: 1.0 00:11:44.838 00:11:44.838 00:11:44.838 Commands Supported and Effects 00:11:44.838 ============================== 00:11:44.838 Admin Commands 00:11:44.838 -------------- 00:11:44.838 Delete I/O Submission Queue (00h): Supported 00:11:44.838 Create I/O Submission Queue (01h): Supported 00:11:44.838 Get Log Page (02h): Supported 00:11:44.838 Delete I/O Completion Queue (04h): Supported 00:11:44.838 Create I/O Completion Queue (05h): Supported 00:11:44.838 Identify (06h): Supported 00:11:44.838 Abort (08h): Supported 00:11:44.838 Set Features (09h): Supported 00:11:44.838 Get Features (0Ah): Supported 00:11:44.838 Asynchronous Event Request (0Ch): Supported 00:11:44.838 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:44.838 Directive Send (19h): Supported 00:11:44.838 Directive Receive (1Ah): Supported 00:11:44.838 Virtualization Management (1Ch): Supported 00:11:44.838 Doorbell Buffer Config (7Ch): Supported 00:11:44.838 Format NVM (80h): Supported LBA-Change 00:11:44.838 I/O Commands 00:11:44.838 ------------ 00:11:44.838 Flush (00h): Supported LBA-Change 00:11:44.838 Write (01h): Supported LBA-Change 00:11:44.838 Read (02h): Supported 00:11:44.838 Compare (05h): Supported 00:11:44.838 Write Zeroes (08h): Supported LBA-Change 00:11:44.838 Dataset Management (09h): Supported LBA-Change 00:11:44.838 Unknown (0Ch): Supported 00:11:44.838 Unknown (12h): Supported 00:11:44.838 Copy (19h): Supported LBA-Change 00:11:44.838 Unknown (1Dh): Supported LBA-Change 00:11:44.838 00:11:44.838 Error Log 00:11:44.838 ========= 00:11:44.838 00:11:44.838 Arbitration 00:11:44.838 =========== 00:11:44.838 Arbitration Burst: no limit 00:11:44.838 00:11:44.838 Power Management 00:11:44.838 ================ 00:11:44.838 Number of Power States: 1 00:11:44.838 Current Power State: Power State #0 00:11:44.838 Power State #0: 00:11:44.838 Max Power: 25.00 W 00:11:44.838 Non-Operational State: Operational 00:11:44.838 Entry Latency: 16 microseconds 00:11:44.838 Exit Latency: 4 microseconds 00:11:44.838 Relative Read Throughput: 0 00:11:44.838 Relative Read Latency: 0 00:11:44.838 Relative 
Write Throughput: 0 00:11:44.838 Relative Write Latency: 0 00:11:44.838 Idle Power: Not Reported 00:11:44.838 Active Power: Not Reported 00:11:44.838 Non-Operational Permissive Mode: Not Supported 00:11:44.838 00:11:44.838 Health Information 00:11:44.838 ================== 00:11:44.838 Critical Warnings: 00:11:44.838 Available Spare Space: OK 00:11:44.838 Temperature: OK 00:11:44.838 Device Reliability: OK 00:11:44.838 Read Only: No 00:11:44.838 Volatile Memory Backup: OK 00:11:44.838 Current Temperature: 323 Kelvin (50 Celsius) 00:11:44.838 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:44.838 Available Spare: 0% 00:11:44.838 Available Spare Threshold: 0% 00:11:44.838 Life Percentage Used: 0% 00:11:44.838 Data Units Read: 4375 00:11:44.838 Data Units Written: 2027 00:11:44.838 Host Read Commands: 169475 00:11:44.838 Host Write Commands: 83164 00:11:44.838 Controller Busy Time: 0 minutes 00:11:44.838 Power Cycles: 0 00:11:44.838 Power On Hours: 0 hours 00:11:44.838 Unsafe Shutdowns: 0 00:11:44.838 Unrecoverable Media Errors: 0 00:11:44.838 Lifetime Error Log Entries: 0 00:11:44.838 Warning Temperature Time: 0 minutes 00:11:44.838 Critical Temperature Time: 0 minutes 00:11:44.838 00:11:44.838 Number of Queues 00:11:44.838 ================ 00:11:44.838 Number of I/O Submission Queues: 64 00:11:44.838 Number of I/O Completion Queues: 64 00:11:44.838 00:11:44.838 ZNS Specific Controller Data 00:11:44.838 ============================ 00:11:44.838 Zone Append Size Limit: 0 00:11:44.838 00:11:44.838 00:11:44.838 Active Namespaces 00:11:44.838 ================= 00:11:44.838 Namespace ID:1 00:11:44.839 Error Recovery Timeout: Unlimited 00:11:44.839 Command Set Identifier: NVM (00h) 00:11:44.839 Deallocate: Supported 00:11:44.839 Deallocated/Unwritten Error: Supported 00:11:44.839 Deallocated Read Value: All 0x00 00:11:44.839 Deallocate in Write Zeroes: Not Supported 00:11:44.839 Deallocated Guard Field: 0xFFFF 00:11:44.839 Flush: Supported 00:11:44.839 Reservation: Not Supported 00:11:44.839 Namespace Sharing Capabilities: Private 00:11:44.839 Size (in LBAs): 1048576 (4GiB) 00:11:44.839 Capacity (in LBAs): 1048576 (4GiB) 00:11:44.839 Utilization (in LBAs): 1048576 (4GiB) 00:11:44.839 Thin Provisioning: Not Supported 00:11:44.839 Per-NS Atomic Units: No 00:11:44.839 Maximum Single Source Range Length: 128 00:11:44.839 Maximum Copy Length: 128 00:11:44.839 Maximum Source Range Count: 128 00:11:44.839 NGUID/EUI64 Never Reused: No 00:11:44.839 Namespace Write Protected: No 00:11:44.839 Number of LBA Formats: 8 00:11:44.839 Current LBA Format: LBA Format #04 00:11:44.839 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:44.839 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:44.839 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:44.839 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:44.839 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:44.839 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:44.839 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:44.839 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:44.839 00:11:44.839 Namespace ID:2 00:11:44.839 Error Recovery Timeout: Unlimited 00:11:44.839 Command Set Identifier: NVM (00h) 00:11:44.839 Deallocate: Supported 00:11:44.839 Deallocated/Unwritten Error: Supported 00:11:44.839 Deallocated Read Value: All 0x00 00:11:44.839 Deallocate in Write Zeroes: Not Supported 00:11:44.839 Deallocated Guard Field: 0xFFFF 00:11:44.839 Flush: Supported 00:11:44.839 Reservation: Not Supported 00:11:44.839 Namespace 
Sharing Capabilities: Private 00:11:44.839 Size (in LBAs): 1048576 (4GiB) 00:11:44.839 Capacity (in LBAs): 1048576 (4GiB) 00:11:44.839 Utilization (in LBAs): 1048576 (4GiB) 00:11:44.839 Thin Provisioning: Not Supported 00:11:44.839 Per-NS Atomic Units: No 00:11:44.839 Maximum Single Source Range Length: 128 00:11:44.839 Maximum Copy Length: 128 00:11:44.839 Maximum Source Range Count: 128 00:11:44.839 NGUID/EUI64 Never Reused: No 00:11:44.839 Namespace Write Protected: No 00:11:44.839 Number of LBA Formats: 8 00:11:44.839 Current LBA Format: LBA Format #04 00:11:44.839 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:44.839 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:44.839 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:44.839 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:44.839 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:44.839 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:44.839 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:44.839 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:44.839 00:11:44.839 Namespace ID:3 00:11:44.839 Error Recovery Timeout: Unlimited 00:11:44.839 Command Set Identifier: NVM (00h) 00:11:44.839 Deallocate: Supported 00:11:44.839 Deallocated/Unwritten Error: Supported 00:11:44.839 Deallocated Read Value: All 0x00 00:11:44.839 Deallocate in Write Zeroes: Not Supported 00:11:44.839 Deallocated Guard Field: 0xFFFF 00:11:44.839 Flush: Supported 00:11:44.839 Reservation: Not Supported 00:11:44.839 Namespace Sharing Capabilities: Private 00:11:44.839 Size (in LBAs): 1048576 (4GiB) 00:11:44.839 Capacity (in LBAs): 1048576 (4GiB) 00:11:44.839 Utilization (in LBAs): 1048576 (4GiB) 00:11:44.839 Thin Provisioning: Not Supported 00:11:44.839 Per-NS Atomic Units: No 00:11:44.839 Maximum Single Source Range Length: 128 00:11:44.839 Maximum Copy Length: 128 00:11:44.839 Maximum Source Range Count: 128 00:11:44.839 NGUID/EUI64 Never Reused: No 00:11:44.839 Namespace Write Protected: No 00:11:44.839 Number of LBA Formats: 8 00:11:44.839 Current LBA Format: LBA Format #04 00:11:44.839 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:44.839 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:44.839 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:44.839 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:44.839 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:44.839 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:44.839 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:44.839 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:44.839 00:11:44.839 23:19:36 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:44.839 23:19:36 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:11:45.099 ===================================================== 00:11:45.099 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:45.099 ===================================================== 00:11:45.099 Controller Capabilities/Features 00:11:45.099 ================================ 00:11:45.099 Vendor ID: 1b36 00:11:45.099 Subsystem Vendor ID: 1af4 00:11:45.099 Serial Number: 12340 00:11:45.099 Model Number: QEMU NVMe Ctrl 00:11:45.099 Firmware Version: 8.0.0 00:11:45.099 Recommended Arb Burst: 6 00:11:45.099 IEEE OUI Identifier: 00 54 52 00:11:45.099 Multi-path I/O 00:11:45.099 May have multiple subsystem ports: No 00:11:45.099 May have multiple controllers: No 00:11:45.099 Associated with SR-IOV VF: No 00:11:45.099 Max Data Transfer 
Size: 524288 00:11:45.099 Max Number of Namespaces: 256 00:11:45.099 Max Number of I/O Queues: 64 00:11:45.099 NVMe Specification Version (VS): 1.4 00:11:45.099 NVMe Specification Version (Identify): 1.4 00:11:45.099 Maximum Queue Entries: 2048 00:11:45.099 Contiguous Queues Required: Yes 00:11:45.099 Arbitration Mechanisms Supported 00:11:45.099 Weighted Round Robin: Not Supported 00:11:45.099 Vendor Specific: Not Supported 00:11:45.099 Reset Timeout: 7500 ms 00:11:45.099 Doorbell Stride: 4 bytes 00:11:45.099 NVM Subsystem Reset: Not Supported 00:11:45.099 Command Sets Supported 00:11:45.099 NVM Command Set: Supported 00:11:45.099 Boot Partition: Not Supported 00:11:45.099 Memory Page Size Minimum: 4096 bytes 00:11:45.099 Memory Page Size Maximum: 65536 bytes 00:11:45.099 Persistent Memory Region: Not Supported 00:11:45.099 Optional Asynchronous Events Supported 00:11:45.099 Namespace Attribute Notices: Supported 00:11:45.099 Firmware Activation Notices: Not Supported 00:11:45.099 ANA Change Notices: Not Supported 00:11:45.099 PLE Aggregate Log Change Notices: Not Supported 00:11:45.099 LBA Status Info Alert Notices: Not Supported 00:11:45.099 EGE Aggregate Log Change Notices: Not Supported 00:11:45.099 Normal NVM Subsystem Shutdown event: Not Supported 00:11:45.099 Zone Descriptor Change Notices: Not Supported 00:11:45.099 Discovery Log Change Notices: Not Supported 00:11:45.099 Controller Attributes 00:11:45.099 128-bit Host Identifier: Not Supported 00:11:45.099 Non-Operational Permissive Mode: Not Supported 00:11:45.099 NVM Sets: Not Supported 00:11:45.099 Read Recovery Levels: Not Supported 00:11:45.099 Endurance Groups: Not Supported 00:11:45.099 Predictable Latency Mode: Not Supported 00:11:45.099 Traffic Based Keep Alive: Not Supported 00:11:45.099 Namespace Granularity: Not Supported 00:11:45.099 SQ Associations: Not Supported 00:11:45.099 UUID List: Not Supported 00:11:45.099 Multi-Domain Subsystem: Not Supported 00:11:45.099 Fixed Capacity Management: Not Supported 00:11:45.099 Variable Capacity Management: Not Supported 00:11:45.099 Delete Endurance Group: Not Supported 00:11:45.099 Delete NVM Set: Not Supported 00:11:45.099 Extended LBA Formats Supported: Supported 00:11:45.099 Flexible Data Placement Supported: Not Supported 00:11:45.099 00:11:45.099 Controller Memory Buffer Support 00:11:45.099 ================================ 00:11:45.099 Supported: No 00:11:45.099 00:11:45.099 Persistent Memory Region Support 00:11:45.099 ================================ 00:11:45.099 Supported: No 00:11:45.099 00:11:45.099 Admin Command Set Attributes 00:11:45.099 ============================ 00:11:45.099 Security Send/Receive: Not Supported 00:11:45.099 Format NVM: Supported 00:11:45.099 Firmware Activate/Download: Not Supported 00:11:45.099 Namespace Management: Supported 00:11:45.099 Device Self-Test: Not Supported 00:11:45.099 Directives: Supported 00:11:45.099 NVMe-MI: Not Supported 00:11:45.099 Virtualization Management: Not Supported 00:11:45.099 Doorbell Buffer Config: Supported 00:11:45.099 Get LBA Status Capability: Not Supported 00:11:45.099 Command & Feature Lockdown Capability: Not Supported 00:11:45.099 Abort Command Limit: 4 00:11:45.099 Async Event Request Limit: 4 00:11:45.099 Number of Firmware Slots: N/A 00:11:45.099 Firmware Slot 1 Read-Only: N/A 00:11:45.099 Firmware Activation Without Reset: N/A 00:11:45.099 Multiple Update Detection Support: N/A 00:11:45.099 Firmware Update Granularity: No Information Provided 00:11:45.099 Per-Namespace SMART Log: Yes 00:11:45.099 
Asymmetric Namespace Access Log Page: Not Supported 00:11:45.099 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:45.099 Command Effects Log Page: Supported 00:11:45.099 Get Log Page Extended Data: Supported 00:11:45.099 Telemetry Log Pages: Not Supported 00:11:45.099 Persistent Event Log Pages: Not Supported 00:11:45.099 Supported Log Pages Log Page: May Support 00:11:45.099 Commands Supported & Effects Log Page: Not Supported 00:11:45.099 Feature Identifiers & Effects Log Page: May Support 00:11:45.099 NVMe-MI Commands & Effects Log Page: May Support 00:11:45.099 Data Area 4 for Telemetry Log: Not Supported 00:11:45.100 Error Log Page Entries Supported: 1 00:11:45.100 Keep Alive: Not Supported 00:11:45.100 00:11:45.100 NVM Command Set Attributes 00:11:45.100 ========================== 00:11:45.100 Submission Queue Entry Size 00:11:45.100 Max: 64 00:11:45.100 Min: 64 00:11:45.100 Completion Queue Entry Size 00:11:45.100 Max: 16 00:11:45.100 Min: 16 00:11:45.100 Number of Namespaces: 256 00:11:45.100 Compare Command: Supported 00:11:45.100 Write Uncorrectable Command: Not Supported 00:11:45.100 Dataset Management Command: Supported 00:11:45.100 Write Zeroes Command: Supported 00:11:45.100 Set Features Save Field: Supported 00:11:45.100 Reservations: Not Supported 00:11:45.100 Timestamp: Supported 00:11:45.100 Copy: Supported 00:11:45.100 Volatile Write Cache: Present 00:11:45.100 Atomic Write Unit (Normal): 1 00:11:45.100 Atomic Write Unit (PFail): 1 00:11:45.100 Atomic Compare & Write Unit: 1 00:11:45.100 Fused Compare & Write: Not Supported 00:11:45.100 Scatter-Gather List 00:11:45.100 SGL Command Set: Supported 00:11:45.100 SGL Keyed: Not Supported 00:11:45.100 SGL Bit Bucket Descriptor: Not Supported 00:11:45.100 SGL Metadata Pointer: Not Supported 00:11:45.100 Oversized SGL: Not Supported 00:11:45.100 SGL Metadata Address: Not Supported 00:11:45.100 SGL Offset: Not Supported 00:11:45.100 Transport SGL Data Block: Not Supported 00:11:45.100 Replay Protected Memory Block: Not Supported 00:11:45.100 00:11:45.100 Firmware Slot Information 00:11:45.100 ========================= 00:11:45.100 Active slot: 1 00:11:45.100 Slot 1 Firmware Revision: 1.0 00:11:45.100 00:11:45.100 00:11:45.100 Commands Supported and Effects 00:11:45.100 ============================== 00:11:45.100 Admin Commands 00:11:45.100 -------------- 00:11:45.100 Delete I/O Submission Queue (00h): Supported 00:11:45.100 Create I/O Submission Queue (01h): Supported 00:11:45.100 Get Log Page (02h): Supported 00:11:45.100 Delete I/O Completion Queue (04h): Supported 00:11:45.100 Create I/O Completion Queue (05h): Supported 00:11:45.100 Identify (06h): Supported 00:11:45.100 Abort (08h): Supported 00:11:45.100 Set Features (09h): Supported 00:11:45.100 Get Features (0Ah): Supported 00:11:45.100 Asynchronous Event Request (0Ch): Supported 00:11:45.100 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:45.100 Directive Send (19h): Supported 00:11:45.100 Directive Receive (1Ah): Supported 00:11:45.100 Virtualization Management (1Ch): Supported 00:11:45.100 Doorbell Buffer Config (7Ch): Supported 00:11:45.100 Format NVM (80h): Supported LBA-Change 00:11:45.100 I/O Commands 00:11:45.100 ------------ 00:11:45.100 Flush (00h): Supported LBA-Change 00:11:45.100 Write (01h): Supported LBA-Change 00:11:45.100 Read (02h): Supported 00:11:45.100 Compare (05h): Supported 00:11:45.100 Write Zeroes (08h): Supported LBA-Change 00:11:45.100 Dataset Management (09h): Supported LBA-Change 00:11:45.100 Unknown (0Ch): Supported 
00:11:45.100 Unknown (12h): Supported 00:11:45.100 Copy (19h): Supported LBA-Change 00:11:45.100 Unknown (1Dh): Supported LBA-Change 00:11:45.100 00:11:45.100 Error Log 00:11:45.100 ========= 00:11:45.100 00:11:45.100 Arbitration 00:11:45.100 =========== 00:11:45.100 Arbitration Burst: no limit 00:11:45.100 00:11:45.100 Power Management 00:11:45.100 ================ 00:11:45.100 Number of Power States: 1 00:11:45.100 Current Power State: Power State #0 00:11:45.100 Power State #0: 00:11:45.100 Max Power: 25.00 W 00:11:45.100 Non-Operational State: Operational 00:11:45.100 Entry Latency: 16 microseconds 00:11:45.100 Exit Latency: 4 microseconds 00:11:45.100 Relative Read Throughput: 0 00:11:45.100 Relative Read Latency: 0 00:11:45.100 Relative Write Throughput: 0 00:11:45.100 Relative Write Latency: 0 00:11:45.100 Idle Power: Not Reported 00:11:45.100 Active Power: Not Reported 00:11:45.100 Non-Operational Permissive Mode: Not Supported 00:11:45.100 00:11:45.100 Health Information 00:11:45.100 ================== 00:11:45.100 Critical Warnings: 00:11:45.100 Available Spare Space: OK 00:11:45.100 Temperature: OK 00:11:45.100 Device Reliability: OK 00:11:45.100 Read Only: No 00:11:45.100 Volatile Memory Backup: OK 00:11:45.100 Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.100 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:45.100 Available Spare: 0% 00:11:45.100 Available Spare Threshold: 0% 00:11:45.100 Life Percentage Used: 0% 00:11:45.100 Data Units Read: 2029 00:11:45.100 Data Units Written: 939 00:11:45.100 Host Read Commands: 77799 00:11:45.100 Host Write Commands: 38601 00:11:45.100 Controller Busy Time: 0 minutes 00:11:45.100 Power Cycles: 0 00:11:45.100 Power On Hours: 0 hours 00:11:45.100 Unsafe Shutdowns: 0 00:11:45.100 Unrecoverable Media Errors: 0 00:11:45.100 Lifetime Error Log Entries: 0 00:11:45.100 Warning Temperature Time: 0 minutes 00:11:45.100 Critical Temperature Time: 0 minutes 00:11:45.100 00:11:45.100 Number of Queues 00:11:45.100 ================ 00:11:45.100 Number of I/O Submission Queues: 64 00:11:45.100 Number of I/O Completion Queues: 64 00:11:45.100 00:11:45.100 ZNS Specific Controller Data 00:11:45.100 ============================ 00:11:45.100 Zone Append Size Limit: 0 00:11:45.100 00:11:45.100 00:11:45.100 Active Namespaces 00:11:45.100 ================= 00:11:45.100 Namespace ID:1 00:11:45.100 Error Recovery Timeout: Unlimited 00:11:45.100 Command Set Identifier: NVM (00h) 00:11:45.100 Deallocate: Supported 00:11:45.100 Deallocated/Unwritten Error: Supported 00:11:45.100 Deallocated Read Value: All 0x00 00:11:45.100 Deallocate in Write Zeroes: Not Supported 00:11:45.100 Deallocated Guard Field: 0xFFFF 00:11:45.100 Flush: Supported 00:11:45.100 Reservation: Not Supported 00:11:45.100 Metadata Transferred as: Separate Metadata Buffer 00:11:45.100 Namespace Sharing Capabilities: Private 00:11:45.100 Size (in LBAs): 1548666 (5GiB) 00:11:45.100 Capacity (in LBAs): 1548666 (5GiB) 00:11:45.100 Utilization (in LBAs): 1548666 (5GiB) 00:11:45.100 Thin Provisioning: Not Supported 00:11:45.100 Per-NS Atomic Units: No 00:11:45.100 Maximum Single Source Range Length: 128 00:11:45.100 Maximum Copy Length: 128 00:11:45.100 Maximum Source Range Count: 128 00:11:45.100 NGUID/EUI64 Never Reused: No 00:11:45.100 Namespace Write Protected: No 00:11:45.100 Number of LBA Formats: 8 00:11:45.100 Current LBA Format: LBA Format #07 00:11:45.100 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:45.100 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:45.100 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:11:45.100 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:45.100 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:45.100 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:45.100 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:45.100 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:45.100 00:11:45.100 23:19:36 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:45.100 23:19:36 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:11:45.360 ===================================================== 00:11:45.360 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:45.360 ===================================================== 00:11:45.360 Controller Capabilities/Features 00:11:45.360 ================================ 00:11:45.360 Vendor ID: 1b36 00:11:45.360 Subsystem Vendor ID: 1af4 00:11:45.360 Serial Number: 12341 00:11:45.360 Model Number: QEMU NVMe Ctrl 00:11:45.360 Firmware Version: 8.0.0 00:11:45.360 Recommended Arb Burst: 6 00:11:45.360 IEEE OUI Identifier: 00 54 52 00:11:45.360 Multi-path I/O 00:11:45.360 May have multiple subsystem ports: No 00:11:45.360 May have multiple controllers: No 00:11:45.360 Associated with SR-IOV VF: No 00:11:45.360 Max Data Transfer Size: 524288 00:11:45.360 Max Number of Namespaces: 256 00:11:45.360 Max Number of I/O Queues: 64 00:11:45.360 NVMe Specification Version (VS): 1.4 00:11:45.360 NVMe Specification Version (Identify): 1.4 00:11:45.360 Maximum Queue Entries: 2048 00:11:45.360 Contiguous Queues Required: Yes 00:11:45.360 Arbitration Mechanisms Supported 00:11:45.360 Weighted Round Robin: Not Supported 00:11:45.360 Vendor Specific: Not Supported 00:11:45.360 Reset Timeout: 7500 ms 00:11:45.360 Doorbell Stride: 4 bytes 00:11:45.360 NVM Subsystem Reset: Not Supported 00:11:45.360 Command Sets Supported 00:11:45.360 NVM Command Set: Supported 00:11:45.360 Boot Partition: Not Supported 00:11:45.360 Memory Page Size Minimum: 4096 bytes 00:11:45.360 Memory Page Size Maximum: 65536 bytes 00:11:45.360 Persistent Memory Region: Not Supported 00:11:45.360 Optional Asynchronous Events Supported 00:11:45.360 Namespace Attribute Notices: Supported 00:11:45.360 Firmware Activation Notices: Not Supported 00:11:45.360 ANA Change Notices: Not Supported 00:11:45.360 PLE Aggregate Log Change Notices: Not Supported 00:11:45.360 LBA Status Info Alert Notices: Not Supported 00:11:45.360 EGE Aggregate Log Change Notices: Not Supported 00:11:45.360 Normal NVM Subsystem Shutdown event: Not Supported 00:11:45.360 Zone Descriptor Change Notices: Not Supported 00:11:45.360 Discovery Log Change Notices: Not Supported 00:11:45.360 Controller Attributes 00:11:45.360 128-bit Host Identifier: Not Supported 00:11:45.360 Non-Operational Permissive Mode: Not Supported 00:11:45.360 NVM Sets: Not Supported 00:11:45.360 Read Recovery Levels: Not Supported 00:11:45.360 Endurance Groups: Not Supported 00:11:45.360 Predictable Latency Mode: Not Supported 00:11:45.360 Traffic Based Keep Alive: Not Supported 00:11:45.360 Namespace Granularity: Not Supported 00:11:45.360 SQ Associations: Not Supported 00:11:45.360 UUID List: Not Supported 00:11:45.360 Multi-Domain Subsystem: Not Supported 00:11:45.360 Fixed Capacity Management: Not Supported 00:11:45.360 Variable Capacity Management: Not Supported 00:11:45.360 Delete Endurance Group: Not Supported 00:11:45.360 Delete NVM Set: Not Supported 00:11:45.360 Extended LBA Formats Supported: Supported 
00:11:45.360 Flexible Data Placement Supported: Not Supported 00:11:45.360 00:11:45.360 Controller Memory Buffer Support 00:11:45.360 ================================ 00:11:45.360 Supported: No 00:11:45.360 00:11:45.360 Persistent Memory Region Support 00:11:45.360 ================================ 00:11:45.360 Supported: No 00:11:45.360 00:11:45.360 Admin Command Set Attributes 00:11:45.360 ============================ 00:11:45.360 Security Send/Receive: Not Supported 00:11:45.360 Format NVM: Supported 00:11:45.360 Firmware Activate/Download: Not Supported 00:11:45.360 Namespace Management: Supported 00:11:45.360 Device Self-Test: Not Supported 00:11:45.360 Directives: Supported 00:11:45.360 NVMe-MI: Not Supported 00:11:45.360 Virtualization Management: Not Supported 00:11:45.360 Doorbell Buffer Config: Supported 00:11:45.360 Get LBA Status Capability: Not Supported 00:11:45.360 Command & Feature Lockdown Capability: Not Supported 00:11:45.360 Abort Command Limit: 4 00:11:45.360 Async Event Request Limit: 4 00:11:45.360 Number of Firmware Slots: N/A 00:11:45.360 Firmware Slot 1 Read-Only: N/A 00:11:45.361 Firmware Activation Without Reset: N/A 00:11:45.361 Multiple Update Detection Support: N/A 00:11:45.361 Firmware Update Granularity: No Information Provided 00:11:45.361 Per-Namespace SMART Log: Yes 00:11:45.361 Asymmetric Namespace Access Log Page: Not Supported 00:11:45.361 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:45.361 Command Effects Log Page: Supported 00:11:45.361 Get Log Page Extended Data: Supported 00:11:45.361 Telemetry Log Pages: Not Supported 00:11:45.361 Persistent Event Log Pages: Not Supported 00:11:45.361 Supported Log Pages Log Page: May Support 00:11:45.361 Commands Supported & Effects Log Page: Not Supported 00:11:45.361 Feature Identifiers & Effects Log Page: May Support 00:11:45.361 NVMe-MI Commands & Effects Log Page: May Support 00:11:45.361 Data Area 4 for Telemetry Log: Not Supported 00:11:45.361 Error Log Page Entries Supported: 1 00:11:45.361 Keep Alive: Not Supported 00:11:45.361 00:11:45.361 NVM Command Set Attributes 00:11:45.361 ========================== 00:11:45.361 Submission Queue Entry Size 00:11:45.361 Max: 64 00:11:45.361 Min: 64 00:11:45.361 Completion Queue Entry Size 00:11:45.361 Max: 16 00:11:45.361 Min: 16 00:11:45.361 Number of Namespaces: 256 00:11:45.361 Compare Command: Supported 00:11:45.361 Write Uncorrectable Command: Not Supported 00:11:45.361 Dataset Management Command: Supported 00:11:45.361 Write Zeroes Command: Supported 00:11:45.361 Set Features Save Field: Supported 00:11:45.361 Reservations: Not Supported 00:11:45.361 Timestamp: Supported 00:11:45.361 Copy: Supported 00:11:45.361 Volatile Write Cache: Present 00:11:45.361 Atomic Write Unit (Normal): 1 00:11:45.361 Atomic Write Unit (PFail): 1 00:11:45.361 Atomic Compare & Write Unit: 1 00:11:45.361 Fused Compare & Write: Not Supported 00:11:45.361 Scatter-Gather List 00:11:45.361 SGL Command Set: Supported 00:11:45.361 SGL Keyed: Not Supported 00:11:45.361 SGL Bit Bucket Descriptor: Not Supported 00:11:45.361 SGL Metadata Pointer: Not Supported 00:11:45.361 Oversized SGL: Not Supported 00:11:45.361 SGL Metadata Address: Not Supported 00:11:45.361 SGL Offset: Not Supported 00:11:45.361 Transport SGL Data Block: Not Supported 00:11:45.361 Replay Protected Memory Block: Not Supported 00:11:45.361 00:11:45.361 Firmware Slot Information 00:11:45.361 ========================= 00:11:45.361 Active slot: 1 00:11:45.361 Slot 1 Firmware Revision: 1.0 00:11:45.361 00:11:45.361 
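For reference, each controller dump in this section is produced by the per-device loop visible in the nvme.sh trace markers above. A minimal sketch of that loop, assuming bdfs holds the four PCIe addresses probed in this run; the array contents are reconstructed from the traddr values in the log, not taken from the script itself:

  bdfs=("0000:00:06.0" "0000:00:07.0" "0000:00:08.0" "0000:00:09.0")  # assumed from this run's traddr values
  for bdf in "${bdfs[@]}"; do
    # Same binary and flags as the traced invocations above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
  done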
00:11:45.361 Commands Supported and Effects 00:11:45.361 ============================== 00:11:45.361 Admin Commands 00:11:45.361 -------------- 00:11:45.361 Delete I/O Submission Queue (00h): Supported 00:11:45.361 Create I/O Submission Queue (01h): Supported 00:11:45.361 Get Log Page (02h): Supported 00:11:45.361 Delete I/O Completion Queue (04h): Supported 00:11:45.361 Create I/O Completion Queue (05h): Supported 00:11:45.361 Identify (06h): Supported 00:11:45.361 Abort (08h): Supported 00:11:45.361 Set Features (09h): Supported 00:11:45.361 Get Features (0Ah): Supported 00:11:45.361 Asynchronous Event Request (0Ch): Supported 00:11:45.361 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:45.361 Directive Send (19h): Supported 00:11:45.361 Directive Receive (1Ah): Supported 00:11:45.361 Virtualization Management (1Ch): Supported 00:11:45.361 Doorbell Buffer Config (7Ch): Supported 00:11:45.361 Format NVM (80h): Supported LBA-Change 00:11:45.361 I/O Commands 00:11:45.361 ------------ 00:11:45.361 Flush (00h): Supported LBA-Change 00:11:45.361 Write (01h): Supported LBA-Change 00:11:45.361 Read (02h): Supported 00:11:45.361 Compare (05h): Supported 00:11:45.361 Write Zeroes (08h): Supported LBA-Change 00:11:45.361 Dataset Management (09h): Supported LBA-Change 00:11:45.361 Unknown (0Ch): Supported 00:11:45.361 Unknown (12h): Supported 00:11:45.361 Copy (19h): Supported LBA-Change 00:11:45.361 Unknown (1Dh): Supported LBA-Change 00:11:45.361 00:11:45.361 Error Log 00:11:45.361 ========= 00:11:45.361 00:11:45.361 Arbitration 00:11:45.361 =========== 00:11:45.361 Arbitration Burst: no limit 00:11:45.361 00:11:45.361 Power Management 00:11:45.361 ================ 00:11:45.361 Number of Power States: 1 00:11:45.361 Current Power State: Power State #0 00:11:45.361 Power State #0: 00:11:45.361 Max Power: 25.00 W 00:11:45.361 Non-Operational State: Operational 00:11:45.361 Entry Latency: 16 microseconds 00:11:45.361 Exit Latency: 4 microseconds 00:11:45.361 Relative Read Throughput: 0 00:11:45.361 Relative Read Latency: 0 00:11:45.361 Relative Write Throughput: 0 00:11:45.361 Relative Write Latency: 0 00:11:45.361 Idle Power: Not Reported 00:11:45.361 Active Power: Not Reported 00:11:45.361 Non-Operational Permissive Mode: Not Supported 00:11:45.361 00:11:45.361 Health Information 00:11:45.361 ================== 00:11:45.361 Critical Warnings: 00:11:45.361 Available Spare Space: OK 00:11:45.361 Temperature: OK 00:11:45.361 Device Reliability: OK 00:11:45.361 Read Only: No 00:11:45.361 Volatile Memory Backup: OK 00:11:45.361 Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.361 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:45.361 Available Spare: 0% 00:11:45.361 Available Spare Threshold: 0% 00:11:45.361 Life Percentage Used: 0% 00:11:45.361 Data Units Read: 1379 00:11:45.361 Data Units Written: 644 00:11:45.361 Host Read Commands: 55731 00:11:45.361 Host Write Commands: 27420 00:11:45.361 Controller Busy Time: 0 minutes 00:11:45.361 Power Cycles: 0 00:11:45.361 Power On Hours: 0 hours 00:11:45.361 Unsafe Shutdowns: 0 00:11:45.361 Unrecoverable Media Errors: 0 00:11:45.361 Lifetime Error Log Entries: 0 00:11:45.361 Warning Temperature Time: 0 minutes 00:11:45.361 Critical Temperature Time: 0 minutes 00:11:45.361 00:11:45.361 Number of Queues 00:11:45.361 ================ 00:11:45.361 Number of I/O Submission Queues: 64 00:11:45.361 Number of I/O Completion Queues: 64 00:11:45.361 00:11:45.361 ZNS Specific Controller Data 00:11:45.361 ============================ 
00:11:45.361 Zone Append Size Limit: 0 00:11:45.361 00:11:45.361 00:11:45.361 Active Namespaces 00:11:45.361 ================= 00:11:45.361 Namespace ID:1 00:11:45.361 Error Recovery Timeout: Unlimited 00:11:45.361 Command Set Identifier: NVM (00h) 00:11:45.361 Deallocate: Supported 00:11:45.361 Deallocated/Unwritten Error: Supported 00:11:45.361 Deallocated Read Value: All 0x00 00:11:45.361 Deallocate in Write Zeroes: Not Supported 00:11:45.361 Deallocated Guard Field: 0xFFFF 00:11:45.361 Flush: Supported 00:11:45.361 Reservation: Not Supported 00:11:45.361 Namespace Sharing Capabilities: Private 00:11:45.361 Size (in LBAs): 1310720 (5GiB) 00:11:45.361 Capacity (in LBAs): 1310720 (5GiB) 00:11:45.361 Utilization (in LBAs): 1310720 (5GiB) 00:11:45.361 Thin Provisioning: Not Supported 00:11:45.361 Per-NS Atomic Units: No 00:11:45.361 Maximum Single Source Range Length: 128 00:11:45.361 Maximum Copy Length: 128 00:11:45.361 Maximum Source Range Count: 128 00:11:45.361 NGUID/EUI64 Never Reused: No 00:11:45.361 Namespace Write Protected: No 00:11:45.361 Number of LBA Formats: 8 00:11:45.361 Current LBA Format: LBA Format #04 00:11:45.361 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:45.361 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:45.361 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:45.361 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:45.361 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:45.361 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:45.361 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:45.361 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:45.361 00:11:45.361 23:19:37 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:45.361 23:19:37 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:11:45.621 ===================================================== 00:11:45.621 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:45.621 ===================================================== 00:11:45.621 Controller Capabilities/Features 00:11:45.621 ================================ 00:11:45.621 Vendor ID: 1b36 00:11:45.621 Subsystem Vendor ID: 1af4 00:11:45.621 Serial Number: 12342 00:11:45.621 Model Number: QEMU NVMe Ctrl 00:11:45.621 Firmware Version: 8.0.0 00:11:45.622 Recommended Arb Burst: 6 00:11:45.622 IEEE OUI Identifier: 00 54 52 00:11:45.622 Multi-path I/O 00:11:45.622 May have multiple subsystem ports: No 00:11:45.622 May have multiple controllers: No 00:11:45.622 Associated with SR-IOV VF: No 00:11:45.622 Max Data Transfer Size: 524288 00:11:45.622 Max Number of Namespaces: 256 00:11:45.622 Max Number of I/O Queues: 64 00:11:45.622 NVMe Specification Version (VS): 1.4 00:11:45.622 NVMe Specification Version (Identify): 1.4 00:11:45.622 Maximum Queue Entries: 2048 00:11:45.622 Contiguous Queues Required: Yes 00:11:45.622 Arbitration Mechanisms Supported 00:11:45.622 Weighted Round Robin: Not Supported 00:11:45.622 Vendor Specific: Not Supported 00:11:45.622 Reset Timeout: 7500 ms 00:11:45.622 Doorbell Stride: 4 bytes 00:11:45.622 NVM Subsystem Reset: Not Supported 00:11:45.622 Command Sets Supported 00:11:45.622 NVM Command Set: Supported 00:11:45.622 Boot Partition: Not Supported 00:11:45.622 Memory Page Size Minimum: 4096 bytes 00:11:45.622 Memory Page Size Maximum: 65536 bytes 00:11:45.622 Persistent Memory Region: Not Supported 00:11:45.622 Optional Asynchronous Events Supported 00:11:45.622 Namespace Attribute Notices: Supported 00:11:45.622 
Firmware Activation Notices: Not Supported 00:11:45.622 ANA Change Notices: Not Supported 00:11:45.622 PLE Aggregate Log Change Notices: Not Supported 00:11:45.622 LBA Status Info Alert Notices: Not Supported 00:11:45.622 EGE Aggregate Log Change Notices: Not Supported 00:11:45.622 Normal NVM Subsystem Shutdown event: Not Supported 00:11:45.622 Zone Descriptor Change Notices: Not Supported 00:11:45.622 Discovery Log Change Notices: Not Supported 00:11:45.622 Controller Attributes 00:11:45.622 128-bit Host Identifier: Not Supported 00:11:45.622 Non-Operational Permissive Mode: Not Supported 00:11:45.622 NVM Sets: Not Supported 00:11:45.622 Read Recovery Levels: Not Supported 00:11:45.622 Endurance Groups: Not Supported 00:11:45.622 Predictable Latency Mode: Not Supported 00:11:45.622 Traffic Based Keep Alive: Not Supported 00:11:45.622 Namespace Granularity: Not Supported 00:11:45.622 SQ Associations: Not Supported 00:11:45.622 UUID List: Not Supported 00:11:45.622 Multi-Domain Subsystem: Not Supported 00:11:45.622 Fixed Capacity Management: Not Supported 00:11:45.622 Variable Capacity Management: Not Supported 00:11:45.622 Delete Endurance Group: Not Supported 00:11:45.622 Delete NVM Set: Not Supported 00:11:45.622 Extended LBA Formats Supported: Supported 00:11:45.622 Flexible Data Placement Supported: Not Supported 00:11:45.622 00:11:45.622 Controller Memory Buffer Support 00:11:45.622 ================================ 00:11:45.622 Supported: No 00:11:45.622 00:11:45.622 Persistent Memory Region Support 00:11:45.622 ================================ 00:11:45.622 Supported: No 00:11:45.622 00:11:45.622 Admin Command Set Attributes 00:11:45.622 ============================ 00:11:45.622 Security Send/Receive: Not Supported 00:11:45.622 Format NVM: Supported 00:11:45.622 Firmware Activate/Download: Not Supported 00:11:45.622 Namespace Management: Supported 00:11:45.622 Device Self-Test: Not Supported 00:11:45.622 Directives: Supported 00:11:45.622 NVMe-MI: Not Supported 00:11:45.622 Virtualization Management: Not Supported 00:11:45.622 Doorbell Buffer Config: Supported 00:11:45.622 Get LBA Status Capability: Not Supported 00:11:45.622 Command & Feature Lockdown Capability: Not Supported 00:11:45.622 Abort Command Limit: 4 00:11:45.622 Async Event Request Limit: 4 00:11:45.622 Number of Firmware Slots: N/A 00:11:45.622 Firmware Slot 1 Read-Only: N/A 00:11:45.622 Firmware Activation Without Reset: N/A 00:11:45.622 Multiple Update Detection Support: N/A 00:11:45.622 Firmware Update Granularity: No Information Provided 00:11:45.622 Per-Namespace SMART Log: Yes 00:11:45.622 Asymmetric Namespace Access Log Page: Not Supported 00:11:45.622 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:45.622 Command Effects Log Page: Supported 00:11:45.622 Get Log Page Extended Data: Supported 00:11:45.622 Telemetry Log Pages: Not Supported 00:11:45.622 Persistent Event Log Pages: Not Supported 00:11:45.622 Supported Log Pages Log Page: May Support 00:11:45.622 Commands Supported & Effects Log Page: Not Supported 00:11:45.622 Feature Identifiers & Effects Log Page: May Support 00:11:45.622 NVMe-MI Commands & Effects Log Page: May Support 00:11:45.622 Data Area 4 for Telemetry Log: Not Supported 00:11:45.622 Error Log Page Entries Supported: 1 00:11:45.622 Keep Alive: Not Supported 00:11:45.622 00:11:45.622 NVM Command Set Attributes 00:11:45.622 ========================== 00:11:45.622 Submission Queue Entry Size 00:11:45.622 Max: 64 00:11:45.622 Min: 64 00:11:45.622 Completion Queue Entry Size 00:11:45.622 Max: 16 
00:11:45.622 Min: 16 00:11:45.622 Number of Namespaces: 256 00:11:45.622 Compare Command: Supported 00:11:45.622 Write Uncorrectable Command: Not Supported 00:11:45.622 Dataset Management Command: Supported 00:11:45.622 Write Zeroes Command: Supported 00:11:45.622 Set Features Save Field: Supported 00:11:45.622 Reservations: Not Supported 00:11:45.622 Timestamp: Supported 00:11:45.622 Copy: Supported 00:11:45.622 Volatile Write Cache: Present 00:11:45.622 Atomic Write Unit (Normal): 1 00:11:45.622 Atomic Write Unit (PFail): 1 00:11:45.622 Atomic Compare & Write Unit: 1 00:11:45.622 Fused Compare & Write: Not Supported 00:11:45.622 Scatter-Gather List 00:11:45.622 SGL Command Set: Supported 00:11:45.622 SGL Keyed: Not Supported 00:11:45.622 SGL Bit Bucket Descriptor: Not Supported 00:11:45.622 SGL Metadata Pointer: Not Supported 00:11:45.622 Oversized SGL: Not Supported 00:11:45.622 SGL Metadata Address: Not Supported 00:11:45.622 SGL Offset: Not Supported 00:11:45.622 Transport SGL Data Block: Not Supported 00:11:45.622 Replay Protected Memory Block: Not Supported 00:11:45.622 00:11:45.622 Firmware Slot Information 00:11:45.622 ========================= 00:11:45.622 Active slot: 1 00:11:45.622 Slot 1 Firmware Revision: 1.0 00:11:45.622 00:11:45.622 00:11:45.622 Commands Supported and Effects 00:11:45.622 ============================== 00:11:45.622 Admin Commands 00:11:45.622 -------------- 00:11:45.622 Delete I/O Submission Queue (00h): Supported 00:11:45.622 Create I/O Submission Queue (01h): Supported 00:11:45.622 Get Log Page (02h): Supported 00:11:45.622 Delete I/O Completion Queue (04h): Supported 00:11:45.622 Create I/O Completion Queue (05h): Supported 00:11:45.622 Identify (06h): Supported 00:11:45.622 Abort (08h): Supported 00:11:45.622 Set Features (09h): Supported 00:11:45.622 Get Features (0Ah): Supported 00:11:45.622 Asynchronous Event Request (0Ch): Supported 00:11:45.622 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:45.622 Directive Send (19h): Supported 00:11:45.622 Directive Receive (1Ah): Supported 00:11:45.622 Virtualization Management (1Ch): Supported 00:11:45.622 Doorbell Buffer Config (7Ch): Supported 00:11:45.622 Format NVM (80h): Supported LBA-Change 00:11:45.622 I/O Commands 00:11:45.622 ------------ 00:11:45.622 Flush (00h): Supported LBA-Change 00:11:45.622 Write (01h): Supported LBA-Change 00:11:45.622 Read (02h): Supported 00:11:45.622 Compare (05h): Supported 00:11:45.622 Write Zeroes (08h): Supported LBA-Change 00:11:45.622 Dataset Management (09h): Supported LBA-Change 00:11:45.622 Unknown (0Ch): Supported 00:11:45.622 Unknown (12h): Supported 00:11:45.622 Copy (19h): Supported LBA-Change 00:11:45.622 Unknown (1Dh): Supported LBA-Change 00:11:45.622 00:11:45.622 Error Log 00:11:45.622 ========= 00:11:45.622 00:11:45.622 Arbitration 00:11:45.622 =========== 00:11:45.622 Arbitration Burst: no limit 00:11:45.622 00:11:45.622 Power Management 00:11:45.622 ================ 00:11:45.623 Number of Power States: 1 00:11:45.623 Current Power State: Power State #0 00:11:45.623 Power State #0: 00:11:45.623 Max Power: 25.00 W 00:11:45.623 Non-Operational State: Operational 00:11:45.623 Entry Latency: 16 microseconds 00:11:45.623 Exit Latency: 4 microseconds 00:11:45.623 Relative Read Throughput: 0 00:11:45.623 Relative Read Latency: 0 00:11:45.623 Relative Write Throughput: 0 00:11:45.623 Relative Write Latency: 0 00:11:45.623 Idle Power: Not Reported 00:11:45.623 Active Power: Not Reported 00:11:45.623 Non-Operational Permissive Mode: Not Supported 
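The Health Information block that follows reports temperatures in kelvin with the Celsius value in parentheses; the conversion the tool prints is simply K - 273 (323 K = 50 C, 343 K = 70 C). A quick sketch for pulling just the health block out of a captured dump, where identify.log is a hypothetical file holding this output:

  # Print from the 'Health Information' header up to the next 'Number of Queues' header
  awk '/Health Information/,/Number of Queues/' identify.log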
00:11:45.623 00:11:45.623 Health Information 00:11:45.623 ================== 00:11:45.623 Critical Warnings: 00:11:45.623 Available Spare Space: OK 00:11:45.623 Temperature: OK 00:11:45.623 Device Reliability: OK 00:11:45.623 Read Only: No 00:11:45.623 Volatile Memory Backup: OK 00:11:45.623 Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.623 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:45.623 Available Spare: 0% 00:11:45.623 Available Spare Threshold: 0% 00:11:45.623 Life Percentage Used: 0% 00:11:45.623 Data Units Read: 4375 00:11:45.623 Data Units Written: 2027 00:11:45.623 Host Read Commands: 169475 00:11:45.623 Host Write Commands: 83164 00:11:45.623 Controller Busy Time: 0 minutes 00:11:45.623 Power Cycles: 0 00:11:45.623 Power On Hours: 0 hours 00:11:45.623 Unsafe Shutdowns: 0 00:11:45.623 Unrecoverable Media Errors: 0 00:11:45.623 Lifetime Error Log Entries: 0 00:11:45.623 Warning Temperature Time: 0 minutes 00:11:45.623 Critical Temperature Time: 0 minutes 00:11:45.623 00:11:45.623 Number of Queues 00:11:45.623 ================ 00:11:45.623 Number of I/O Submission Queues: 64 00:11:45.623 Number of I/O Completion Queues: 64 00:11:45.623 00:11:45.623 ZNS Specific Controller Data 00:11:45.623 ============================ 00:11:45.623 Zone Append Size Limit: 0 00:11:45.623 00:11:45.623 00:11:45.623 Active Namespaces 00:11:45.623 ================= 00:11:45.623 Namespace ID:1 00:11:45.623 Error Recovery Timeout: Unlimited 00:11:45.623 Command Set Identifier: NVM (00h) 00:11:45.623 Deallocate: Supported 00:11:45.623 Deallocated/Unwritten Error: Supported 00:11:45.623 Deallocated Read Value: All 0x00 00:11:45.623 Deallocate in Write Zeroes: Not Supported 00:11:45.623 Deallocated Guard Field: 0xFFFF 00:11:45.623 Flush: Supported 00:11:45.623 Reservation: Not Supported 00:11:45.623 Namespace Sharing Capabilities: Private 00:11:45.623 Size (in LBAs): 1048576 (4GiB) 00:11:45.623 Capacity (in LBAs): 1048576 (4GiB) 00:11:45.623 Utilization (in LBAs): 1048576 (4GiB) 00:11:45.623 Thin Provisioning: Not Supported 00:11:45.623 Per-NS Atomic Units: No 00:11:45.623 Maximum Single Source Range Length: 128 00:11:45.623 Maximum Copy Length: 128 00:11:45.623 Maximum Source Range Count: 128 00:11:45.623 NGUID/EUI64 Never Reused: No 00:11:45.623 Namespace Write Protected: No 00:11:45.623 Number of LBA Formats: 8 00:11:45.623 Current LBA Format: LBA Format #04 00:11:45.623 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:45.623 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:45.623 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:45.623 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:45.623 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:45.623 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:45.623 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:45.623 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:45.623 00:11:45.623 Namespace ID:2 00:11:45.623 Error Recovery Timeout: Unlimited 00:11:45.623 Command Set Identifier: NVM (00h) 00:11:45.623 Deallocate: Supported 00:11:45.623 Deallocated/Unwritten Error: Supported 00:11:45.623 Deallocated Read Value: All 0x00 00:11:45.623 Deallocate in Write Zeroes: Not Supported 00:11:45.623 Deallocated Guard Field: 0xFFFF 00:11:45.623 Flush: Supported 00:11:45.623 Reservation: Not Supported 00:11:45.623 Namespace Sharing Capabilities: Private 00:11:45.623 Size (in LBAs): 1048576 (4GiB) 00:11:45.623 Capacity (in LBAs): 1048576 (4GiB) 00:11:45.623 Utilization (in LBAs): 1048576 (4GiB) 00:11:45.623 Thin 
Provisioning: Not Supported 00:11:45.623 Per-NS Atomic Units: No 00:11:45.623 Maximum Single Source Range Length: 128 00:11:45.623 Maximum Copy Length: 128 00:11:45.623 Maximum Source Range Count: 128 00:11:45.623 NGUID/EUI64 Never Reused: No 00:11:45.623 Namespace Write Protected: No 00:11:45.623 Number of LBA Formats: 8 00:11:45.623 Current LBA Format: LBA Format #04 00:11:45.623 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:45.623 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:45.623 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:45.623 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:45.623 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:45.623 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:45.623 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:45.623 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:45.623 00:11:45.623 Namespace ID:3 00:11:45.623 Error Recovery Timeout: Unlimited 00:11:45.623 Command Set Identifier: NVM (00h) 00:11:45.623 Deallocate: Supported 00:11:45.623 Deallocated/Unwritten Error: Supported 00:11:45.623 Deallocated Read Value: All 0x00 00:11:45.623 Deallocate in Write Zeroes: Not Supported 00:11:45.623 Deallocated Guard Field: 0xFFFF 00:11:45.623 Flush: Supported 00:11:45.623 Reservation: Not Supported 00:11:45.623 Namespace Sharing Capabilities: Private 00:11:45.623 Size (in LBAs): 1048576 (4GiB) 00:11:45.623 Capacity (in LBAs): 1048576 (4GiB) 00:11:45.623 Utilization (in LBAs): 1048576 (4GiB) 00:11:45.623 Thin Provisioning: Not Supported 00:11:45.623 Per-NS Atomic Units: No 00:11:45.623 Maximum Single Source Range Length: 128 00:11:45.623 Maximum Copy Length: 128 00:11:45.623 Maximum Source Range Count: 128 00:11:45.623 NGUID/EUI64 Never Reused: No 00:11:45.623 Namespace Write Protected: No 00:11:45.623 Number of LBA Formats: 8 00:11:45.623 Current LBA Format: LBA Format #04 00:11:45.623 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:45.623 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:45.623 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:45.623 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:45.623 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:45.623 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:45.623 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:45.623 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:45.623 00:11:45.623 23:19:37 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:45.623 23:19:37 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:11:45.882 ===================================================== 00:11:45.882 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:45.882 ===================================================== 00:11:45.882 Controller Capabilities/Features 00:11:45.882 ================================ 00:11:45.882 Vendor ID: 1b36 00:11:45.882 Subsystem Vendor ID: 1af4 00:11:45.882 Serial Number: 12343 00:11:45.882 Model Number: QEMU NVMe Ctrl 00:11:45.882 Firmware Version: 8.0.0 00:11:45.883 Recommended Arb Burst: 6 00:11:45.883 IEEE OUI Identifier: 00 54 52 00:11:45.883 Multi-path I/O 00:11:45.883 May have multiple subsystem ports: No 00:11:45.883 May have multiple controllers: Yes 00:11:45.883 Associated with SR-IOV VF: No 00:11:45.883 Max Data Transfer Size: 524288 00:11:45.883 Max Number of Namespaces: 256 00:11:45.883 Max Number of I/O Queues: 64 00:11:45.883 NVMe Specification Version (VS): 1.4 00:11:45.883 NVMe Specification Version 
(Identify): 1.4 00:11:45.883 Maximum Queue Entries: 2048 00:11:45.883 Contiguous Queues Required: Yes 00:11:45.883 Arbitration Mechanisms Supported 00:11:45.883 Weighted Round Robin: Not Supported 00:11:45.883 Vendor Specific: Not Supported 00:11:45.883 Reset Timeout: 7500 ms 00:11:45.883 Doorbell Stride: 4 bytes 00:11:45.883 NVM Subsystem Reset: Not Supported 00:11:45.883 Command Sets Supported 00:11:45.883 NVM Command Set: Supported 00:11:45.883 Boot Partition: Not Supported 00:11:45.883 Memory Page Size Minimum: 4096 bytes 00:11:45.883 Memory Page Size Maximum: 65536 bytes 00:11:45.883 Persistent Memory Region: Not Supported 00:11:45.883 Optional Asynchronous Events Supported 00:11:45.883 Namespace Attribute Notices: Supported 00:11:45.883 Firmware Activation Notices: Not Supported 00:11:45.883 ANA Change Notices: Not Supported 00:11:45.883 PLE Aggregate Log Change Notices: Not Supported 00:11:45.883 LBA Status Info Alert Notices: Not Supported 00:11:45.883 EGE Aggregate Log Change Notices: Not Supported 00:11:45.883 Normal NVM Subsystem Shutdown event: Not Supported 00:11:45.883 Zone Descriptor Change Notices: Not Supported 00:11:45.883 Discovery Log Change Notices: Not Supported 00:11:45.883 Controller Attributes 00:11:45.883 128-bit Host Identifier: Not Supported 00:11:45.883 Non-Operational Permissive Mode: Not Supported 00:11:45.883 NVM Sets: Not Supported 00:11:45.883 Read Recovery Levels: Not Supported 00:11:45.883 Endurance Groups: Supported 00:11:45.883 Predictable Latency Mode: Not Supported 00:11:45.883 Traffic Based Keep Alive: Not Supported 00:11:45.883 Namespace Granularity: Not Supported 00:11:45.883 SQ Associations: Not Supported 00:11:45.883 UUID List: Not Supported 00:11:45.883 Multi-Domain Subsystem: Not Supported 00:11:45.883 Fixed Capacity Management: Not Supported 00:11:45.883 Variable Capacity Management: Not Supported 00:11:45.883 Delete Endurance Group: Not Supported 00:11:45.883 Delete NVM Set: Not Supported 00:11:45.883 Extended LBA Formats Supported: Supported 00:11:45.883 Flexible Data Placement Supported: Supported 00:11:45.883 00:11:45.883 Controller Memory Buffer Support 00:11:45.883 ================================ 00:11:45.883 Supported: No 00:11:45.883 00:11:45.883 Persistent Memory Region Support 00:11:45.883 ================================ 00:11:45.883 Supported: No 00:11:45.883 00:11:45.883 Admin Command Set Attributes 00:11:45.883 ============================ 00:11:45.883 Security Send/Receive: Not Supported 00:11:45.883 Format NVM: Supported 00:11:45.883 Firmware Activate/Download: Not Supported 00:11:45.883 Namespace Management: Supported 00:11:45.883 Device Self-Test: Not Supported 00:11:45.883 Directives: Supported 00:11:45.883 NVMe-MI: Not Supported 00:11:45.883 Virtualization Management: Not Supported 00:11:45.883 Doorbell Buffer Config: Supported 00:11:45.883 Get LBA Status Capability: Not Supported 00:11:45.883 Command & Feature Lockdown Capability: Not Supported 00:11:45.883 Abort Command Limit: 4 00:11:45.883 Async Event Request Limit: 4 00:11:45.883 Number of Firmware Slots: N/A 00:11:45.883 Firmware Slot 1 Read-Only: N/A 00:11:45.883 Firmware Activation Without Reset: N/A 00:11:45.883 Multiple Update Detection Support: N/A 00:11:45.883 Firmware Update Granularity: No Information Provided 00:11:45.883 Per-Namespace SMART Log: Yes 00:11:45.883 Asymmetric Namespace Access Log Page: Not Supported 00:11:45.883 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:45.883 Command Effects Log Page: Supported 00:11:45.883 Get Log Page Extended 
Data: Supported 00:11:45.883 Telemetry Log Pages: Not Supported 00:11:45.883 Persistent Event Log Pages: Not Supported 00:11:45.883 Supported Log Pages Log Page: May Support 00:11:45.883 Commands Supported & Effects Log Page: Not Supported 00:11:45.883 Feature Identifiers & Effects Log Page: May Support 00:11:45.883 NVMe-MI Commands & Effects Log Page: May Support 00:11:45.883 Data Area 4 for Telemetry Log: Not Supported 00:11:45.883 Error Log Page Entries Supported: 1 00:11:45.883 Keep Alive: Not Supported 00:11:45.883 00:11:45.883 NVM Command Set Attributes 00:11:45.883 ========================== 00:11:45.883 Submission Queue Entry Size 00:11:45.883 Max: 64 00:11:45.883 Min: 64 00:11:45.883 Completion Queue Entry Size 00:11:45.883 Max: 16 00:11:45.883 Min: 16 00:11:45.883 Number of Namespaces: 256 00:11:45.883 Compare Command: Supported 00:11:45.883 Write Uncorrectable Command: Not Supported 00:11:45.883 Dataset Management Command: Supported 00:11:45.883 Write Zeroes Command: Supported 00:11:45.883 Set Features Save Field: Supported 00:11:45.883 Reservations: Not Supported 00:11:45.883 Timestamp: Supported 00:11:45.883 Copy: Supported 00:11:45.883 Volatile Write Cache: Present 00:11:45.883 Atomic Write Unit (Normal): 1 00:11:45.883 Atomic Write Unit (PFail): 1 00:11:45.883 Atomic Compare & Write Unit: 1 00:11:45.883 Fused Compare & Write: Not Supported 00:11:45.883 Scatter-Gather List 00:11:45.883 SGL Command Set: Supported 00:11:45.883 SGL Keyed: Not Supported 00:11:45.883 SGL Bit Bucket Descriptor: Not Supported 00:11:45.883 SGL Metadata Pointer: Not Supported 00:11:45.883 Oversized SGL: Not Supported 00:11:45.883 SGL Metadata Address: Not Supported 00:11:45.883 SGL Offset: Not Supported 00:11:45.883 Transport SGL Data Block: Not Supported 00:11:45.883 Replay Protected Memory Block: Not Supported 00:11:45.883 00:11:45.883 Firmware Slot Information 00:11:45.883 ========================= 00:11:45.883 Active slot: 1 00:11:45.883 Slot 1 Firmware Revision: 1.0 00:11:45.883 00:11:45.883 00:11:45.883 Commands Supported and Effects 00:11:45.883 ============================== 00:11:45.883 Admin Commands 00:11:45.883 -------------- 00:11:45.883 Delete I/O Submission Queue (00h): Supported 00:11:45.883 Create I/O Submission Queue (01h): Supported 00:11:45.883 Get Log Page (02h): Supported 00:11:45.883 Delete I/O Completion Queue (04h): Supported 00:11:45.883 Create I/O Completion Queue (05h): Supported 00:11:45.883 Identify (06h): Supported 00:11:45.883 Abort (08h): Supported 00:11:45.884 Set Features (09h): Supported 00:11:45.884 Get Features (0Ah): Supported 00:11:45.884 Asynchronous Event Request (0Ch): Supported 00:11:45.884 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:45.884 Directive Send (19h): Supported 00:11:45.884 Directive Receive (1Ah): Supported 00:11:45.884 Virtualization Management (1Ch): Supported 00:11:45.884 Doorbell Buffer Config (7Ch): Supported 00:11:45.884 Format NVM (80h): Supported LBA-Change 00:11:45.884 I/O Commands 00:11:45.884 ------------ 00:11:45.884 Flush (00h): Supported LBA-Change 00:11:45.884 Write (01h): Supported LBA-Change 00:11:45.884 Read (02h): Supported 00:11:45.884 Compare (05h): Supported 00:11:45.884 Write Zeroes (08h): Supported LBA-Change 00:11:45.884 Dataset Management (09h): Supported LBA-Change 00:11:45.884 Unknown (0Ch): Supported 00:11:45.884 Unknown (12h): Supported 00:11:45.884 Copy (19h): Supported LBA-Change 00:11:45.884 Unknown (1Dh): Supported LBA-Change 00:11:45.884 00:11:45.884 Error Log 00:11:45.884 ========= 
00:11:45.884 00:11:45.884 Arbitration 00:11:45.884 =========== 00:11:45.884 Arbitration Burst: no limit 00:11:45.884 00:11:45.884 Power Management 00:11:45.884 ================ 00:11:45.884 Number of Power States: 1 00:11:45.884 Current Power State: Power State #0 00:11:45.884 Power State #0: 00:11:45.884 Max Power: 25.00 W 00:11:45.884 Non-Operational State: Operational 00:11:45.884 Entry Latency: 16 microseconds 00:11:45.884 Exit Latency: 4 microseconds 00:11:45.884 Relative Read Throughput: 0 00:11:45.884 Relative Read Latency: 0 00:11:45.884 Relative Write Throughput: 0 00:11:45.884 Relative Write Latency: 0 00:11:45.884 Idle Power: Not Reported 00:11:45.884 Active Power: Not Reported 00:11:45.884 Non-Operational Permissive Mode: Not Supported 00:11:45.884 00:11:45.884 Health Information 00:11:45.884 ================== 00:11:45.884 Critical Warnings: 00:11:45.884 Available Spare Space: OK 00:11:45.884 Temperature: OK 00:11:45.884 Device Reliability: OK 00:11:45.884 Read Only: No 00:11:45.884 Volatile Memory Backup: OK 00:11:45.884 Current Temperature: 323 Kelvin (50 Celsius) 00:11:45.884 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:45.884 Available Spare: 0% 00:11:45.884 Available Spare Threshold: 0% 00:11:45.884 Life Percentage Used: 0% 00:11:45.884 Data Units Read: 1584 00:11:45.884 Data Units Written: 742 00:11:45.884 Host Read Commands: 57455 00:11:45.884 Host Write Commands: 28250 00:11:45.884 Controller Busy Time: 0 minutes 00:11:45.884 Power Cycles: 0 00:11:45.884 Power On Hours: 0 hours 00:11:45.884 Unsafe Shutdowns: 0 00:11:45.884 Unrecoverable Media Errors: 0 00:11:45.884 Lifetime Error Log Entries: 0 00:11:45.884 Warning Temperature Time: 0 minutes 00:11:45.884 Critical Temperature Time: 0 minutes 00:11:45.884 00:11:45.884 Number of Queues 00:11:45.884 ================ 00:11:45.884 Number of I/O Submission Queues: 64 00:11:45.884 Number of I/O Completion Queues: 64 00:11:45.884 00:11:45.884 ZNS Specific Controller Data 00:11:45.884 ============================ 00:11:45.884 Zone Append Size Limit: 0 00:11:45.884 00:11:45.884 00:11:45.884 Active Namespaces 00:11:45.884 ================= 00:11:45.884 Namespace ID:1 00:11:45.884 Error Recovery Timeout: Unlimited 00:11:45.884 Command Set Identifier: NVM (00h) 00:11:45.884 Deallocate: Supported 00:11:45.884 Deallocated/Unwritten Error: Supported 00:11:45.884 Deallocated Read Value: All 0x00 00:11:45.884 Deallocate in Write Zeroes: Not Supported 00:11:45.884 Deallocated Guard Field: 0xFFFF 00:11:45.884 Flush: Supported 00:11:45.884 Reservation: Not Supported 00:11:45.884 Namespace Sharing Capabilities: Multiple Controllers 00:11:45.884 Size (in LBAs): 262144 (1GiB) 00:11:45.884 Capacity (in LBAs): 262144 (1GiB) 00:11:45.884 Utilization (in LBAs): 262144 (1GiB) 00:11:45.884 Thin Provisioning: Not Supported 00:11:45.884 Per-NS Atomic Units: No 00:11:45.884 Maximum Single Source Range Length: 128 00:11:45.884 Maximum Copy Length: 128 00:11:45.884 Maximum Source Range Count: 128 00:11:45.884 NGUID/EUI64 Never Reused: No 00:11:45.884 Namespace Write Protected: No 00:11:45.884 Endurance group ID: 1 00:11:45.884 Number of LBA Formats: 8 00:11:45.884 Current LBA Format: LBA Format #04 00:11:45.884 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:45.884 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:45.884 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:45.884 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:45.884 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:45.884 LBA Format #05: Data Size: 4096 
Metadata Size: 8 00:11:45.884 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:45.884 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:45.884 00:11:45.884 Get Feature FDP: 00:11:45.884 ================ 00:11:45.884 Enabled: Yes 00:11:45.884 FDP configuration index: 0 00:11:45.884 00:11:45.884 FDP configurations log page 00:11:45.884 =========================== 00:11:45.884 Number of FDP configurations: 1 00:11:45.884 Version: 0 00:11:45.884 Size: 112 00:11:45.884 FDP Configuration Descriptor: 0 00:11:45.884 Descriptor Size: 96 00:11:45.884 Reclaim Group Identifier format: 2 00:11:45.884 FDP Volatile Write Cache: Not Present 00:11:45.884 FDP Configuration: Valid 00:11:45.884 Vendor Specific Size: 0 00:11:45.884 Number of Reclaim Groups: 2 00:11:45.884 Number of Reclaim Unit Handles: 8 00:11:45.884 Max Placement Identifiers: 128 00:11:45.884 Number of Namespaces Supported: 256 00:11:45.884 Reclaim Unit Nominal Size: 6000000 bytes 00:11:45.884 Estimated Reclaim Unit Time Limit: Not Reported 00:11:45.884 RUH Desc #000: RUH Type: Initially Isolated 00:11:45.884 RUH Desc #001: RUH Type: Initially Isolated 00:11:45.884 RUH Desc #002: RUH Type: Initially Isolated 00:11:45.884 RUH Desc #003: RUH Type: Initially Isolated 00:11:45.884 RUH Desc #004: RUH Type: Initially Isolated 00:11:45.884 RUH Desc #005: RUH Type: Initially Isolated 00:11:45.884 RUH Desc #006: RUH Type: Initially Isolated 00:11:45.884 RUH Desc #007: RUH Type: Initially Isolated 00:11:45.884 00:11:45.884 FDP reclaim unit handle usage log page 00:11:46.143 ====================================== 00:11:46.143 Number of Reclaim Unit Handles: 8 00:11:46.143 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:46.143 RUH Usage Desc #001: RUH Attributes: Unused 00:11:46.143 RUH Usage Desc #002: RUH Attributes: Unused 00:11:46.143 RUH Usage Desc #003: RUH Attributes: Unused 00:11:46.143 RUH Usage Desc #004: RUH Attributes: Unused 00:11:46.143 RUH Usage Desc #005: RUH Attributes: Unused 00:11:46.143 RUH Usage Desc #006: RUH Attributes: Unused 00:11:46.143 RUH Usage Desc #007: RUH Attributes: Unused 00:11:46.143 00:11:46.143 FDP statistics log page 00:11:46.143 ======================= 00:11:46.143 Host bytes with metadata written: 493535232 00:11:46.143 Media bytes with metadata written: 493678592 00:11:46.143 Media bytes erased: 0 00:11:46.143 00:11:46.143 FDP events log page 00:11:46.143 =================== 00:11:46.143 Number of FDP events: 0 00:11:46.143 00:11:46.143 00:11:46.143 real 0m1.621s 00:11:46.143 user 0m0.567s 00:11:46.143 sys 0m0.823s 00:11:46.143 ************************************ 00:11:46.143 END TEST nvme_identify 00:11:46.143 ************************************ 00:11:46.143 23:19:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.144 23:19:37 -- common/autotest_common.sh@10 -- # set +x 00:11:46.144 23:19:37 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:46.144 23:19:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:46.144 23:19:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:46.144 23:19:37 -- common/autotest_common.sh@10 -- # set +x 00:11:46.144 ************************************ 00:11:46.144 START TEST nvme_perf 00:11:46.144 ************************************ 00:11:46.144 23:19:37 -- common/autotest_common.sh@1104 -- # nvme_perf 00:11:46.144 23:19:37 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:47.521 Initializing NVMe Controllers 00:11:47.521 Attached to 
NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:47.521 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:47.522 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:47.522 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:47.522 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:47.522 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:11:47.522 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:11:47.522 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:11:47.522 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:11:47.522 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:11:47.522 Initialization complete. Launching workers. 00:11:47.522 ======================================================== 00:11:47.522 Latency(us) 00:11:47.522 Device Information : IOPS MiB/s Average min max 00:11:47.522 PCIE (0000:00:06.0) NSID 1 from core 0: 14061.21 164.78 9098.48 6666.32 39560.07 00:11:47.522 PCIE (0000:00:07.0) NSID 1 from core 0: 14061.21 164.78 9088.91 6647.37 38241.88 00:11:47.522 PCIE (0000:00:09.0) NSID 1 from core 0: 14061.21 164.78 9075.38 6847.79 38938.34 00:11:47.522 PCIE (0000:00:08.0) NSID 1 from core 0: 14061.21 164.78 9062.41 6922.20 37879.42 00:11:47.522 PCIE (0000:00:08.0) NSID 2 from core 0: 14061.21 164.78 9049.13 6628.93 36738.34 00:11:47.522 PCIE (0000:00:08.0) NSID 3 from core 0: 14061.21 164.78 9036.07 6879.87 35386.47 00:11:47.522 ======================================================== 00:11:47.522 Total : 84367.29 988.68 9068.40 6628.93 39560.07 00:11:47.522 00:11:47.522 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:47.522 ================================================================================= 00:11:47.522 1.00000% : 7211.592us 00:11:47.522 10.00000% : 7843.264us 00:11:47.522 25.00000% : 8211.740us 00:11:47.522 50.00000% : 8738.133us 00:11:47.522 75.00000% : 9317.166us 00:11:47.522 90.00000% : 9685.642us 00:11:47.522 95.00000% : 9948.839us 00:11:47.522 98.00000% : 15160.135us 00:11:47.522 99.00000% : 18318.496us 00:11:47.522 99.50000% : 37058.108us 00:11:47.522 99.90000% : 39163.682us 00:11:47.522 99.99000% : 39584.797us 00:11:47.522 99.99900% : 39584.797us 00:11:47.522 99.99990% : 39584.797us 00:11:47.522 99.99999% : 39584.797us 00:11:47.522 00:11:47.522 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:11:47.522 ================================================================================= 00:11:47.522 1.00000% : 7264.231us 00:11:47.522 10.00000% : 8001.182us 00:11:47.522 25.00000% : 8264.379us 00:11:47.522 50.00000% : 8738.133us 00:11:47.522 75.00000% : 9264.527us 00:11:47.522 90.00000% : 9580.363us 00:11:47.522 95.00000% : 10001.478us 00:11:47.522 98.00000% : 14423.184us 00:11:47.522 99.00000% : 16634.037us 00:11:47.522 99.50000% : 35794.763us 00:11:47.522 99.90000% : 37900.337us 00:11:47.522 99.99000% : 38321.452us 00:11:47.522 99.99900% : 38321.452us 00:11:47.522 99.99990% : 38321.452us 00:11:47.522 99.99999% : 38321.452us 00:11:47.522 00:11:47.522 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:11:47.522 ================================================================================= 00:11:47.522 1.00000% : 7316.871us 00:11:47.522 10.00000% : 8001.182us 00:11:47.522 25.00000% : 8264.379us 00:11:47.522 50.00000% : 8738.133us 00:11:47.522 75.00000% : 9264.527us 00:11:47.522 90.00000% : 9580.363us 00:11:47.522 95.00000% : 9948.839us 00:11:47.522 98.00000% : 13475.676us 00:11:47.522 99.00000% : 16002.365us 00:11:47.522 99.50000% : 
36426.435us 00:11:47.522 99.90000% : 38532.010us 00:11:47.522 99.99000% : 38953.124us 00:11:47.522 99.99900% : 38953.124us 00:11:47.522 99.99990% : 38953.124us 00:11:47.522 99.99999% : 38953.124us 00:11:47.522 00:11:47.522 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:11:47.522 ================================================================================= 00:11:47.522 1.00000% : 7369.510us 00:11:47.522 10.00000% : 8001.182us 00:11:47.522 25.00000% : 8264.379us 00:11:47.522 50.00000% : 8738.133us 00:11:47.522 75.00000% : 9264.527us 00:11:47.522 90.00000% : 9580.363us 00:11:47.522 95.00000% : 9896.199us 00:11:47.522 98.00000% : 13317.757us 00:11:47.522 99.00000% : 15054.856us 00:11:47.522 99.50000% : 35163.091us 00:11:47.522 99.90000% : 37479.222us 00:11:47.522 99.99000% : 37900.337us 00:11:47.522 99.99900% : 37900.337us 00:11:47.522 99.99990% : 37900.337us 00:11:47.522 99.99999% : 37900.337us 00:11:47.522 00:11:47.522 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:11:47.522 ================================================================================= 00:11:47.522 1.00000% : 7316.871us 00:11:47.522 10.00000% : 8001.182us 00:11:47.522 25.00000% : 8264.379us 00:11:47.522 50.00000% : 8738.133us 00:11:47.522 75.00000% : 9211.888us 00:11:47.522 90.00000% : 9580.363us 00:11:47.522 95.00000% : 10001.478us 00:11:47.522 98.00000% : 13212.479us 00:11:47.522 99.00000% : 15686.529us 00:11:47.522 99.50000% : 34110.304us 00:11:47.522 99.90000% : 36215.878us 00:11:47.522 99.99000% : 36847.550us 00:11:47.522 99.99900% : 36847.550us 00:11:47.522 99.99990% : 36847.550us 00:11:47.522 99.99999% : 36847.550us 00:11:47.522 00:11:47.522 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:11:47.522 ================================================================================= 00:11:47.522 1.00000% : 7316.871us 00:11:47.522 10.00000% : 8001.182us 00:11:47.522 25.00000% : 8264.379us 00:11:47.522 50.00000% : 8738.133us 00:11:47.522 75.00000% : 9211.888us 00:11:47.522 90.00000% : 9527.724us 00:11:47.522 95.00000% : 9948.839us 00:11:47.522 98.00000% : 12896.643us 00:11:47.522 99.00000% : 16844.594us 00:11:47.522 99.50000% : 32636.402us 00:11:47.522 99.90000% : 34952.533us 00:11:47.522 99.99000% : 35373.648us 00:11:47.522 99.99900% : 35584.206us 00:11:47.522 99.99990% : 35584.206us 00:11:47.522 99.99999% : 35584.206us 00:11:47.522 00:11:47.522 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:47.522 ============================================================================== 00:11:47.522 Range in us Cumulative IO count 00:11:47.522 6658.879 - 6685.198: 0.0213% ( 3) 00:11:47.522 6685.198 - 6711.518: 0.0497% ( 4) 00:11:47.522 6711.518 - 6737.838: 0.0710% ( 3) 00:11:47.522 6737.838 - 6790.477: 0.1278% ( 8) 00:11:47.522 6790.477 - 6843.116: 0.2486% ( 17) 00:11:47.522 6843.116 - 6895.756: 0.3267% ( 11) 00:11:47.522 6895.756 - 6948.395: 0.4403% ( 16) 00:11:47.522 6948.395 - 7001.035: 0.5895% ( 21) 00:11:47.522 7001.035 - 7053.674: 0.7031% ( 16) 00:11:47.522 7053.674 - 7106.313: 0.8310% ( 18) 00:11:47.522 7106.313 - 7158.953: 0.9588% ( 18) 00:11:47.522 7158.953 - 7211.592: 1.1009% ( 20) 00:11:47.522 7211.592 - 7264.231: 1.2571% ( 22) 00:11:47.522 7264.231 - 7316.871: 1.3991% ( 20) 00:11:47.522 7316.871 - 7369.510: 1.5696% ( 24) 00:11:47.522 7369.510 - 7422.149: 1.7116% ( 20) 00:11:47.522 7422.149 - 7474.789: 1.8750% ( 23) 00:11:47.522 7474.789 - 7527.428: 2.1378% ( 37) 00:11:47.522 7527.428 - 7580.067: 2.6989% ( 79) 00:11:47.522 7580.067 - 
00:11:47.522 
00:11:47.522 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:11:47.522 ==============================================================================
00:11:47.522        Range in us     Cumulative    IO count
00:11:47.522   6658.879 -  6685.198:    0.0213% (    3)
00:11:47.522   ... [intermediate buckets condensed, here and in the histograms below, to the first bucket, the bucket crossing 50%, and the final bucket]
00:11:47.523   8685.494 -  8738.133:   50.6889% (  323)
00:11:47.523   ...
00:11:47.523  39374.239 - 39584.797:  100.0000% (    5)
00:11:47.524 
00:11:47.524 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0:
00:11:47.524 ==============================================================================
00:11:47.524        Range in us     Cumulative    IO count
00:11:47.524   6632.559 -  6658.879:    0.0071% (    1)
00:11:47.524   ...
00:11:47.524   8685.494 -  8738.133:   50.1420% (  377)
00:11:47.524   ...
00:11:47.525  38110.895 - 38321.452:  100.0000% (    5)
00:11:47.525 
00:11:47.525 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0:
00:11:47.525 ==============================================================================
00:11:47.525        Range in us     Cumulative    IO count
00:11:47.525   6843.116 -  6895.756:    0.0355% (    5)
00:11:47.525   ...
00:11:47.525   8685.494 -  8738.133:   50.7173% (  380)
00:11:47.525   ...
00:11:47.526  38742.567 - 38953.124:  100.0000% (    6)
00:11:47.526 
00:11:47.526 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:11:47.526 ==============================================================================
00:11:47.526        Range in us     Cumulative    IO count
00:11:47.526   6895.756 -  6948.395:    0.0426% (    6)
00:11:47.526   ...
00:11:47.526   8685.494 -  8738.133:   50.7102% (  386)
00:11:47.527   ...
00:11:47.527  37689.780 - 37900.337:  100.0000% (    6)
00:11:47.527 
00:11:47.527 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
00:11:47.527 ==============================================================================
00:11:47.527        Range in us     Cumulative    IO count
00:11:47.527   6606.239 -  6632.559:    0.0142% (    2)
00:11:47.527   ...
00:11:47.528   8685.494 -  8738.133:   50.6534% (  383)
00:11:47.528   ...
00:11:47.529  36636.993 - 36847.550:  100.0000% (    3)
00:11:47.529 
00:11:47.529 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
00:11:47.529 ==============================================================================
00:11:47.529        Range in us     Cumulative    IO count
00:11:47.529   6843.116 -  6895.756:    0.0142% (    2)
00:11:47.529   ...
00:11:47.529   8685.494 -  8738.133:   50.8665% (  378)
00:11:47.530   ...
00:11:47.530  35373.648 - 35584.206:  100.0000% (    1)
00:11:47.530 
00:11:47.530 23:19:39 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:11:48.904 Initializing NVMe Controllers
00:11:48.904 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010]
00:11:48.904 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010]
00:11:48.904 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010]
00:11:48.904 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010]
00:11:48.904 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0
00:11:48.904 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0
00:11:48.904 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0
00:11:48.904 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0
00:11:48.904 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0
00:11:48.904 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0
00:11:48.904 Initialization complete. Launching workers.
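Before the results table, a hedged gloss of the spdk_nvme_perf flags in the invocation above. These meanings are my reading of the tool's usage text, not something this run prints, so verify them against the SPDK revision under test:

```python
# Hedged gloss of the invocation above (assumed flag semantics, not log output).
perf_flags = {
    "-q 128":   "queue depth: 128 outstanding I/Os per namespace",
    "-w write": "I/O pattern: sequential writes",
    "-o 12288": "I/O size in bytes (12 KiB)",
    "-t 1":     "run time in seconds",
    "-LL":      "software latency tracking; the repeated L also prints per-bucket histograms",
    "-i 0":     "shared memory group ID",
}
for flag, meaning in perf_flags.items():
    print(f"{flag:10s} {meaning}")
```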
00:11:48.904 ========================================================
00:11:48.904                                                                       Latency(us)
00:11:48.904 Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:48.904 PCIE (0000:00:06.0) NSID 1 from core 0:   15958.25     187.01    8016.16    4958.89   39798.09
00:11:48.904 PCIE (0000:00:07.0) NSID 1 from core 0:   15958.25     187.01    8008.47    5666.68   38938.68
00:11:48.904 PCIE (0000:00:09.0) NSID 1 from core 0:   15958.25     187.01    8001.61    5493.72   38407.53
00:11:48.904 PCIE (0000:00:08.0) NSID 1 from core 0:   15958.25     187.01    7994.69    5393.97   37154.85
00:11:48.904 PCIE (0000:00:08.0) NSID 2 from core 0:   15958.25     187.01    7988.14    5388.44   35628.93
00:11:48.904 PCIE (0000:00:08.0) NSID 3 from core 0:   16085.92     188.51    7918.29    5058.58   25506.63
00:11:48.904 ========================================================
00:11:48.904 Total                                  :   95877.19    1123.56    7987.80    4958.89   39798.09
00:11:48.904 
00:11:48.904 Summary latency data (us) from core 0, by percentile; columns are
00:11:48.904 PCIE (0000:00:06.0) NSID 1, (0000:00:07.0) NSID 1, (0000:00:09.0) NSID 1,
00:11:48.904 and (0000:00:08.0) NSID 1/2/3:
00:11:48.904 =================================================================================
00:11:48.904 Percentile    06.0/ns1    07.0/ns1    09.0/ns1    08.0/ns1    08.0/ns2    08.0/ns3
00:11:48.904   1.00000%    5632.411    5842.969    5895.608    5816.649    5842.969    5658.731
00:11:48.904  10.00000%    6290.403    6264.084    6343.043    6237.764    6264.084    6237.764
00:11:48.904  25.00000%    6711.518    6658.879    6685.198    6685.198    6737.838    6711.518
00:11:48.904  50.00000%    7422.149    7422.149    7264.231    7422.149    7369.510    7422.149
00:11:48.904  75.00000%    9001.330    9001.330    9001.330    9053.969    9053.969    9053.969
00:11:48.904  90.00000%    9843.560    9790.920    9843.560    9738.281    9790.920    9790.920
00:11:48.904  95.00000%   10264.675   10317.314   10212.035   10212.035   10264.675   10212.035
00:11:48.904  98.00000%   11001.626   10843.708   11475.380   11370.101   10896.347   10896.347
00:11:48.904  99.00000%   11896.495   11633.298   12370.249   12212.331   12001.773   11738.577
00:11:48.904  99.50000%   37689.780   36847.550   36426.435   35163.091   33689.189   23477.153
00:11:48.904  99.90000%   39374.239   38532.010   38110.895   36847.550   35373.648   25161.613
00:11:48.904  99.99000%   39795.354   38953.124   38532.010   37268.665   35794.763   25582.728
00:11:48.904  99.99900%   40005.912   38953.124   38532.010   37268.665   35794.763   25582.728
00:11:48.904  99.99990%   40005.912   38953.124   38532.010   37268.665   35794.763   25582.728
00:11:48.904  99.99999%   40005.912   38953.124   38532.010   37268.665   35794.763   25582.728
00:11:48.904 =================================================================================
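The two tables above are the part of this output most worth machine-reading. As a minimal bash sketch (not part of the captured log), the per-namespace rows can be pulled out of a saved copy with standard awk; the file name perf.log is an assumption, and the field positions match the reformatted device-information table above (13 whitespace-separated fields per data row).

  # Print BDF, IOPS and average latency (us) for each namespace row.
  # Header and histogram lines also mention "from core 0:" but have a
  # different field count, so NF == 13 filters them out.
  awk '/from core 0:/ && NF == 13 { print $3, $9, $11 }' perf.log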
00:11:48.905 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0:
00:11:48.905 ==============================================================================
00:11:48.905 Range in us Cumulative IO count
00:11:48.905 [ per-bucket dump condensed: buckets span 4948.100us - 40005.912us and reach 100.0000% cumulative IO ]
00:11:48.906 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0:
00:11:48.906 ==============================================================================
00:11:48.906 Range in us Cumulative IO count
00:11:48.906 [ per-bucket dump condensed: buckets span 5658.731us - 38953.124us and reach 100.0000% cumulative IO ]
00:11:48.907 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0:
00:11:48.907 ==============================================================================
00:11:48.907 Range in us Cumulative IO count
00:11:48.907 [ per-bucket dump condensed: buckets span 5474.493us - 38532.010us and reach 100.0000% cumulative IO ]
00:11:48.908 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0:
00:11:48.908 ==============================================================================
00:11:48.908 Range in us Cumulative IO count
00:11:48.908 [ per-bucket dump condensed: buckets span 5369.214us - 37268.665us and reach 100.0000% cumulative IO ]
00:11:48.909 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0:
00:11:48.909 ==============================================================================
00:11:48.909 Range in us Cumulative IO count
00:11:48.909 [ per-bucket dump condensed: buckets span 5369.214us - 35794.763us and reach 100.0000% cumulative IO ]
00:11:48.910 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0:
00:11:48.910 ==============================================================================
00:11:48.910 Range in us Cumulative IO count
00:11:48.910 [ per-bucket dump condensed: buckets span 5053.378us - 25582.728us and reach 100.0000% cumulative IO ]
00:11:48.911 
00:11:48.911 23:19:40 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:11:48.911 
00:11:48.911 real 0m2.827s
00:11:48.911 user 0m2.423s
00:11:48.911 sys 0m0.301s
00:11:48.911 23:19:40 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:48.911 ************************************
00:11:48.911 END TEST nvme_perf
00:11:48.911 ************************************
00:11:48.911 23:19:40 -- common/autotest_common.sh@10 -- # set +x
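To re-run a workload like this by hand, outside the run_test wrapper, SPDK's standard perf example can be pointed at the same controllers. A hedged sketch follows: the log does not record the exact flags this job used, so the queue depth, IO size, workload and duration below are illustrative, and the binary path assumes perf was built next to hello_world under build/examples.

  # Illustrative invocation only; adjust -q/-o/-w/-t for the workload under test.
  sudo /home/vagrant/spdk_repo/spdk/build/examples/perf \
      -q 128 -o 4096 -w randread -t 10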
00:11:48.911 23:19:40 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:11:48.911 23:19:40 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:11:48.911 23:19:40 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:11:48.911 23:19:40 -- common/autotest_common.sh@10 -- # set +x
00:11:48.911 ************************************
00:11:48.911 START TEST nvme_hello_world
00:11:48.911 ************************************
00:11:48.911 23:19:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:11:49.493 Initializing NVMe Controllers
00:11:49.493 Attached to 0000:00:06.0
00:11:49.493 Namespace ID: 1 size: 6GB
00:11:49.493 Attached to 0000:00:07.0
00:11:49.493 Namespace ID: 1 size: 5GB
00:11:49.493 Attached to 0000:00:09.0
00:11:49.493 Namespace ID: 1 size: 1GB
00:11:49.493 Attached to 0000:00:08.0
00:11:49.493 Namespace ID: 1 size: 4GB
00:11:49.493 Namespace ID: 2 size: 4GB
00:11:49.493 Namespace ID: 3 size: 4GB
00:11:49.493 Initialization complete.
00:11:49.493 INFO: using host memory buffer for IO
00:11:49.493 Hello world!
00:11:49.493 INFO: using host memory buffer for IO
00:11:49.493 Hello world!
00:11:49.493 INFO: using host memory buffer for IO
00:11:49.493 Hello world!
00:11:49.493 INFO: using host memory buffer for IO
00:11:49.493 Hello world!
00:11:49.493 INFO: using host memory buffer for IO
00:11:49.493 Hello world!
00:11:49.493 INFO: using host memory buffer for IO
00:11:49.493 Hello world!
00:11:49.493 
00:11:49.493 real 0m0.347s
00:11:49.493 user 0m0.179s
00:11:49.493 sys 0m0.127s
00:11:49.493 23:19:40 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:49.493 23:19:40 -- common/autotest_common.sh@10 -- # set +x
00:11:49.493 ************************************
00:11:49.493 END TEST nvme_hello_world
00:11:49.493 ************************************
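run_test essentially wraps the binary it names with the banners and timing seen above, so the same example can be run directly against whichever controllers SPDK has claimed; a small sketch, reusing the exact invocation the log itself shows (-i 0 is, by SPDK convention, the shared-memory instance ID):

  # Run the hello_world example standalone and report its exit status.
  sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
  echo "hello_world exit code: $?"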
parameter 00:11:49.806 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:11:49.806 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:11:49.806 0000:00:09.0: build_io_request_3 Invalid IO length parameter 00:11:49.806 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:11:49.806 0000:00:09.0: build_io_request_5 Invalid IO length parameter 00:11:49.806 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:11:49.806 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:11:49.806 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:11:49.806 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:11:49.806 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:11:49.806 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:11:49.806 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:11:49.806 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:11:49.806 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:11:49.806 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:11:49.806 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:11:49.806 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:11:49.806 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:11:49.806 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:11:49.806 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:11:49.806 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:11:49.806 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:11:49.806 0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:11:49.806 NVMe Readv/Writev Request test 00:11:49.806 Attached to 0000:00:06.0 00:11:49.806 Attached to 0000:00:07.0 00:11:49.806 Attached to 0000:00:09.0 00:11:49.806 Attached to 0000:00:08.0 00:11:49.806 0000:00:06.0: build_io_request_2 test passed 00:11:49.806 0000:00:06.0: build_io_request_4 test passed 00:11:49.806 0000:00:06.0: build_io_request_5 test passed 00:11:49.806 0000:00:06.0: build_io_request_6 test passed 00:11:49.806 0000:00:06.0: build_io_request_7 test passed 00:11:49.806 0000:00:06.0: build_io_request_10 test passed 00:11:49.806 0000:00:07.0: build_io_request_2 test passed 00:11:49.806 0000:00:07.0: build_io_request_4 test passed 00:11:49.806 0000:00:07.0: build_io_request_5 test passed 00:11:49.806 0000:00:07.0: build_io_request_6 test passed 00:11:49.806 0000:00:07.0: build_io_request_7 test passed 00:11:49.806 0000:00:07.0: build_io_request_10 test passed 00:11:49.806 Cleaning up... 
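Throughout this log each test is driven through the run_test helper, and the trace fragments around this point show its shape: an argument-count check such as '[' 2 -le 1 ']', xtrace being switched off, a START banner, a timed run of the test body, and an END banner. The real/user/sys triplets that follow each test, including the one just below, come from that timer. A minimal sketch of such a wrapper, inferred only from these trace lines; the actual helper in autotest_common.sh is not fully shown in this log, and its argument-count guard is omitted here because its purpose is not visible:

    # Sketch only: reconstructed from the xtrace fragments in this log,
    # not copied from autotest_common.sh.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        time "$@"    # bash's time builtin emits the real/user/sys lines
        echo "END TEST $test_name"
        echo "************************************"
    }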
00:11:49.806 00:11:49.806 real 0m0.491s 00:11:49.806 user 0m0.327s 00:11:49.806 sys 0m0.122s 00:11:49.806 23:19:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.806 23:19:41 -- common/autotest_common.sh@10 -- # set +x 00:11:49.806 ************************************ 00:11:49.806 END TEST nvme_sgl 00:11:49.806 ************************************ 00:11:50.075 23:19:41 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:50.075 23:19:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:50.075 23:19:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:50.075 23:19:41 -- common/autotest_common.sh@10 -- # set +x 00:11:50.075 ************************************ 00:11:50.075 START TEST nvme_e2edp 00:11:50.075 ************************************ 00:11:50.075 23:19:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:50.075 NVMe Write/Read with End-to-End data protection test 00:11:50.075 Attached to 0000:00:06.0 00:11:50.075 Attached to 0000:00:07.0 00:11:50.075 Attached to 0000:00:09.0 00:11:50.075 Attached to 0000:00:08.0 00:11:50.075 Cleaning up... 00:11:50.075 00:11:50.075 real 0m0.247s 00:11:50.075 user 0m0.080s 00:11:50.075 sys 0m0.123s 00:11:50.075 ************************************ 00:11:50.075 END TEST nvme_e2edp 00:11:50.075 ************************************ 00:11:50.075 23:19:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.075 23:19:41 -- common/autotest_common.sh@10 -- # set +x 00:11:50.334 23:19:41 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:50.334 23:19:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:50.334 23:19:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:50.334 23:19:41 -- common/autotest_common.sh@10 -- # set +x 00:11:50.334 ************************************ 00:11:50.334 START TEST nvme_reserve 00:11:50.334 ************************************ 00:11:50.334 23:19:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:50.593 ===================================================== 00:11:50.593 NVMe Controller at PCI bus 0, device 6, function 0 00:11:50.593 ===================================================== 00:11:50.593 Reservations: Not Supported 00:11:50.593 ===================================================== 00:11:50.593 NVMe Controller at PCI bus 0, device 7, function 0 00:11:50.593 ===================================================== 00:11:50.593 Reservations: Not Supported 00:11:50.593 ===================================================== 00:11:50.593 NVMe Controller at PCI bus 0, device 9, function 0 00:11:50.593 ===================================================== 00:11:50.593 Reservations: Not Supported 00:11:50.593 ===================================================== 00:11:50.593 NVMe Controller at PCI bus 0, device 8, function 0 00:11:50.593 ===================================================== 00:11:50.593 Reservations: Not Supported 00:11:50.593 Reservation test passed 00:11:50.593 00:11:50.593 real 0m0.267s 00:11:50.593 user 0m0.094s 00:11:50.593 sys 0m0.137s 00:11:50.593 23:19:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.593 ************************************ 00:11:50.593 END TEST nvme_reserve 00:11:50.593 ************************************ 00:11:50.593 23:19:42 -- common/autotest_common.sh@10 -- # set +x 00:11:50.593 23:19:42 -- 
nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:50.593 23:19:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:50.593 23:19:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:50.593 23:19:42 -- common/autotest_common.sh@10 -- # set +x 00:11:50.593 ************************************ 00:11:50.593 START TEST nvme_err_injection 00:11:50.593 ************************************ 00:11:50.593 23:19:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:50.852 NVMe Error Injection test 00:11:50.852 Attached to 0000:00:06.0 00:11:50.852 Attached to 0000:00:07.0 00:11:50.852 Attached to 0000:00:09.0 00:11:50.852 Attached to 0000:00:08.0 00:11:50.852 0000:00:08.0: get features failed as expected 00:11:50.852 0000:00:06.0: get features failed as expected 00:11:50.852 0000:00:07.0: get features failed as expected 00:11:50.852 0000:00:09.0: get features failed as expected 00:11:50.852 0000:00:06.0: get features successfully as expected 00:11:50.852 0000:00:07.0: get features successfully as expected 00:11:50.852 0000:00:09.0: get features successfully as expected 00:11:50.852 0000:00:08.0: get features successfully as expected 00:11:50.852 0000:00:06.0: read failed as expected 00:11:50.852 0000:00:07.0: read failed as expected 00:11:50.852 0000:00:09.0: read failed as expected 00:11:50.853 0000:00:08.0: read failed as expected 00:11:50.853 0000:00:06.0: read successfully as expected 00:11:50.853 0000:00:07.0: read successfully as expected 00:11:50.853 0000:00:09.0: read successfully as expected 00:11:50.853 0000:00:08.0: read successfully as expected 00:11:50.853 Cleaning up... 00:11:50.853 00:11:50.853 real 0m0.334s 00:11:50.853 user 0m0.150s 00:11:50.853 sys 0m0.140s 00:11:50.853 23:19:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.853 ************************************ 00:11:50.853 END TEST nvme_err_injection 00:11:50.853 ************************************ 00:11:50.853 23:19:42 -- common/autotest_common.sh@10 -- # set +x 00:11:50.853 23:19:42 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:50.853 23:19:42 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:50.853 23:19:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:50.853 23:19:42 -- common/autotest_common.sh@10 -- # set +x 00:11:50.853 ************************************ 00:11:50.853 START TEST nvme_overhead 00:11:50.853 ************************************ 00:11:50.853 23:19:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:52.229 Initializing NVMe Controllers 00:11:52.229 Attached to 0000:00:06.0 00:11:52.229 Attached to 0000:00:07.0 00:11:52.229 Attached to 0000:00:09.0 00:11:52.229 Attached to 0000:00:08.0 00:11:52.229 Initialization complete. Launching workers. 
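The submit and complete histograms that follow print one latency bucket per entry in the form 'low - high: cumulative% ( count )'. The avg/min/max line is reported in nanoseconds, while the bucket bounds are microseconds (hence the 'Range in us' header), and the percentage column is cumulative across buckets. A rough awk one-liner for re-totaling the bucket counts from lines in that format; this is a hypothetical reader for such output, not part of the SPDK tooling:

    # Sum the '( count )' fields of histogram lines fed on stdin.
    awk -F'[()]' '/ - .*%/ { gsub(/ /, "", $2); total += $2 } END { print total, "IOs" }'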
00:11:52.229 submit (in ns) avg, min, max = 13147.8, 12027.3, 106647.4 00:11:52.229 complete (in ns) avg, min, max = 9861.2, 8416.1, 47005.6 00:11:52.229 00:11:52.229 Submit histogram 00:11:52.229 ================ 00:11:52.229 Range in us Cumulative Count 00:11:52.229 11.978 - 12.029: 0.0181% ( 1) 00:11:52.229 12.029 - 12.080: 0.0362% ( 1) 00:11:52.229 12.080 - 12.132: 0.0905% ( 3) 00:11:52.229 12.132 - 12.183: 0.2715% ( 10) 00:11:52.229 12.183 - 12.235: 1.1584% ( 49) 00:11:52.229 12.235 - 12.286: 3.3122% ( 119) 00:11:52.229 12.286 - 12.337: 6.1176% ( 155) 00:11:52.229 12.337 - 12.389: 9.0679% ( 163) 00:11:52.229 12.389 - 12.440: 12.2353% ( 175) 00:11:52.229 12.440 - 12.492: 16.3620% ( 228) 00:11:52.229 12.492 - 12.543: 21.3394% ( 275) 00:11:52.229 12.543 - 12.594: 27.6561% ( 349) 00:11:52.229 12.594 - 12.646: 33.2670% ( 310) 00:11:52.229 12.646 - 12.697: 38.4072% ( 284) 00:11:52.229 12.697 - 12.749: 44.1267% ( 316) 00:11:52.229 12.749 - 12.800: 49.9729% ( 323) 00:11:52.229 12.800 - 12.851: 56.3258% ( 351) 00:11:52.229 12.851 - 12.903: 62.2624% ( 328) 00:11:52.229 12.903 - 12.954: 67.9638% ( 315) 00:11:52.229 12.954 - 13.006: 72.7421% ( 264) 00:11:52.229 13.006 - 13.057: 76.9412% ( 232) 00:11:52.229 13.057 - 13.108: 80.1086% ( 175) 00:11:52.229 13.108 - 13.160: 83.1312% ( 167) 00:11:52.229 13.160 - 13.263: 87.2579% ( 228) 00:11:52.229 13.263 - 13.365: 90.2986% ( 168) 00:11:52.229 13.365 - 13.468: 91.9457% ( 91) 00:11:52.229 13.468 - 13.571: 93.0679% ( 62) 00:11:52.229 13.571 - 13.674: 93.7557% ( 38) 00:11:52.229 13.674 - 13.777: 94.1357% ( 21) 00:11:52.229 13.777 - 13.880: 94.3529% ( 12) 00:11:52.229 13.880 - 13.982: 94.4615% ( 6) 00:11:52.229 13.982 - 14.085: 94.4796% ( 1) 00:11:52.229 14.291 - 14.394: 94.5158% ( 2) 00:11:52.229 14.496 - 14.599: 94.5339% ( 1) 00:11:52.229 14.702 - 14.805: 94.5520% ( 1) 00:11:52.229 14.908 - 15.010: 94.5701% ( 1) 00:11:52.229 15.422 - 15.524: 94.5882% ( 1) 00:11:52.229 15.524 - 15.627: 94.6063% ( 1) 00:11:52.229 15.730 - 15.833: 94.6244% ( 1) 00:11:52.229 15.833 - 15.936: 94.6425% ( 1) 00:11:52.229 15.936 - 16.039: 94.6787% ( 2) 00:11:52.229 16.039 - 16.141: 94.7692% ( 5) 00:11:52.229 16.141 - 16.244: 94.8235% ( 3) 00:11:52.229 16.244 - 16.347: 94.8597% ( 2) 00:11:52.229 16.347 - 16.450: 94.9683% ( 6) 00:11:52.229 16.450 - 16.553: 95.1855% ( 12) 00:11:52.229 16.553 - 16.655: 95.4208% ( 13) 00:11:52.229 16.655 - 16.758: 95.6923% ( 15) 00:11:52.229 16.758 - 16.861: 95.9457% ( 14) 00:11:52.229 16.861 - 16.964: 96.2715% ( 18) 00:11:52.229 16.964 - 17.067: 96.5611% ( 16) 00:11:52.229 17.067 - 17.169: 96.6697% ( 6) 00:11:52.229 17.169 - 17.272: 96.9231% ( 14) 00:11:52.229 17.272 - 17.375: 97.0498% ( 7) 00:11:52.229 17.375 - 17.478: 97.1403% ( 5) 00:11:52.229 17.478 - 17.581: 97.2489% ( 6) 00:11:52.229 17.581 - 17.684: 97.3394% ( 5) 00:11:52.229 17.684 - 17.786: 97.5023% ( 9) 00:11:52.229 17.786 - 17.889: 97.6471% ( 8) 00:11:52.229 17.889 - 17.992: 97.7557% ( 6) 00:11:52.229 17.992 - 18.095: 97.8643% ( 6) 00:11:52.229 18.095 - 18.198: 97.9729% ( 6) 00:11:52.229 18.198 - 18.300: 98.1176% ( 8) 00:11:52.229 18.300 - 18.403: 98.1900% ( 4) 00:11:52.229 18.403 - 18.506: 98.3167% ( 7) 00:11:52.229 18.506 - 18.609: 98.3529% ( 2) 00:11:52.229 18.609 - 18.712: 98.4434% ( 5) 00:11:52.229 18.712 - 18.814: 98.4977% ( 3) 00:11:52.229 18.814 - 18.917: 98.5520% ( 3) 00:11:52.229 19.020 - 19.123: 98.6244% ( 4) 00:11:52.229 19.123 - 19.226: 98.6787% ( 3) 00:11:52.229 19.226 - 19.329: 98.7692% ( 5) 00:11:52.229 19.329 - 19.431: 98.8235% ( 3) 00:11:52.229 19.431 - 19.534: 98.8959% ( 
4) 00:11:52.229 19.534 - 19.637: 98.9321% ( 2) 00:11:52.229 19.637 - 19.740: 98.9683% ( 2) 00:11:52.229 19.740 - 19.843: 99.0769% ( 6) 00:11:52.229 19.945 - 20.048: 99.1131% ( 2) 00:11:52.229 20.048 - 20.151: 99.1312% ( 1) 00:11:52.229 20.151 - 20.254: 99.1855% ( 3) 00:11:52.229 20.562 - 20.665: 99.2398% ( 3) 00:11:52.229 20.768 - 20.871: 99.2760% ( 2) 00:11:52.229 20.871 - 20.973: 99.3122% ( 2) 00:11:52.229 21.282 - 21.385: 99.3484% ( 2) 00:11:52.229 22.002 - 22.104: 99.3665% ( 1) 00:11:52.229 22.104 - 22.207: 99.3846% ( 1) 00:11:52.229 22.207 - 22.310: 99.4027% ( 1) 00:11:52.229 22.516 - 22.618: 99.4208% ( 1) 00:11:52.229 23.030 - 23.133: 99.4389% ( 1) 00:11:52.229 23.235 - 23.338: 99.4751% ( 2) 00:11:52.229 23.338 - 23.441: 99.5475% ( 4) 00:11:52.229 23.544 - 23.647: 99.6018% ( 3) 00:11:52.229 23.955 - 24.058: 99.6199% ( 1) 00:11:52.229 24.058 - 24.161: 99.6380% ( 1) 00:11:52.229 24.263 - 24.366: 99.6561% ( 1) 00:11:52.229 24.366 - 24.469: 99.6742% ( 1) 00:11:52.229 24.572 - 24.675: 99.6923% ( 1) 00:11:52.229 24.983 - 25.086: 99.7285% ( 2) 00:11:52.229 25.189 - 25.292: 99.7466% ( 1) 00:11:52.229 25.600 - 25.703: 99.7647% ( 1) 00:11:52.229 25.908 - 26.011: 99.7828% ( 1) 00:11:52.229 26.217 - 26.320: 99.8009% ( 1) 00:11:52.229 28.993 - 29.198: 99.8190% ( 1) 00:11:52.229 29.815 - 30.021: 99.8371% ( 1) 00:11:52.229 31.049 - 31.255: 99.8552% ( 1) 00:11:52.229 31.460 - 31.666: 99.8733% ( 1) 00:11:52.229 31.871 - 32.077: 99.8914% ( 1) 00:11:52.230 33.722 - 33.928: 99.9095% ( 1) 00:11:52.230 35.778 - 35.984: 99.9276% ( 1) 00:11:52.230 53.462 - 53.873: 99.9457% ( 1) 00:11:52.230 56.341 - 56.752: 99.9638% ( 1) 00:11:52.230 95.409 - 95.820: 99.9819% ( 1) 00:11:52.230 106.101 - 106.924: 100.0000% ( 1) 00:11:52.230 00:11:52.230 Complete histogram 00:11:52.230 ================== 00:11:52.230 Range in us Cumulative Count 00:11:52.230 8.379 - 8.431: 0.0543% ( 3) 00:11:52.230 8.431 - 8.482: 0.4344% ( 21) 00:11:52.230 8.482 - 8.533: 1.6471% ( 67) 00:11:52.230 8.533 - 8.585: 3.9276% ( 126) 00:11:52.230 8.585 - 8.636: 5.9548% ( 112) 00:11:52.230 8.636 - 8.688: 8.7602% ( 155) 00:11:52.230 8.688 - 8.739: 13.4299% ( 258) 00:11:52.230 8.739 - 8.790: 19.5294% ( 337) 00:11:52.230 8.790 - 8.842: 26.2986% ( 374) 00:11:52.230 8.842 - 8.893: 33.3032% ( 387) 00:11:52.230 8.893 - 8.945: 40.0181% ( 371) 00:11:52.230 8.945 - 8.996: 47.4208% ( 409) 00:11:52.230 8.996 - 9.047: 54.1538% ( 372) 00:11:52.230 9.047 - 9.099: 60.7421% ( 364) 00:11:52.230 9.099 - 9.150: 66.9321% ( 342) 00:11:52.230 9.150 - 9.202: 71.4027% ( 247) 00:11:52.230 9.202 - 9.253: 75.9457% ( 251) 00:11:52.230 9.253 - 9.304: 79.0407% ( 171) 00:11:52.230 9.304 - 9.356: 81.7014% ( 147) 00:11:52.230 9.356 - 9.407: 83.4027% ( 94) 00:11:52.230 9.407 - 9.459: 85.0498% ( 91) 00:11:52.230 9.459 - 9.510: 86.5520% ( 83) 00:11:52.230 9.510 - 9.561: 87.5837% ( 57) 00:11:52.230 9.561 - 9.613: 88.3439% ( 42) 00:11:52.230 9.613 - 9.664: 89.0860% ( 41) 00:11:52.230 9.664 - 9.716: 89.6471% ( 31) 00:11:52.230 9.716 - 9.767: 89.9910% ( 19) 00:11:52.230 9.767 - 9.818: 90.0995% ( 6) 00:11:52.230 9.818 - 9.870: 90.2624% ( 9) 00:11:52.230 9.870 - 9.921: 90.4072% ( 8) 00:11:52.230 9.973 - 10.024: 90.4253% ( 1) 00:11:52.230 10.024 - 10.076: 90.4615% ( 2) 00:11:52.230 10.076 - 10.127: 90.5158% ( 3) 00:11:52.230 10.178 - 10.230: 90.5339% ( 1) 00:11:52.230 10.230 - 10.281: 90.5520% ( 1) 00:11:52.230 10.281 - 10.333: 90.6063% ( 3) 00:11:52.230 10.384 - 10.435: 90.6244% ( 1) 00:11:52.230 10.641 - 10.692: 90.6425% ( 1) 00:11:52.230 10.898 - 10.949: 90.6606% ( 1) 00:11:52.230 11.052 - 
11.104: 90.6787% ( 1) 00:11:52.230 11.104 - 11.155: 90.6968% ( 1) 00:11:52.230 11.206 - 11.258: 90.7330% ( 2) 00:11:52.230 11.309 - 11.361: 90.7692% ( 2) 00:11:52.230 11.361 - 11.412: 90.7873% ( 1) 00:11:52.230 11.463 - 11.515: 90.8054% ( 1) 00:11:52.230 11.515 - 11.566: 90.8416% ( 2) 00:11:52.230 12.389 - 12.440: 90.8597% ( 1) 00:11:52.230 12.594 - 12.646: 90.8778% ( 1) 00:11:52.230 12.800 - 12.851: 90.8959% ( 1) 00:11:52.230 13.160 - 13.263: 90.9140% ( 1) 00:11:52.230 13.263 - 13.365: 90.9321% ( 1) 00:11:52.230 13.468 - 13.571: 90.9502% ( 1) 00:11:52.230 13.571 - 13.674: 90.9683% ( 1) 00:11:52.230 13.674 - 13.777: 91.0045% ( 2) 00:11:52.230 13.777 - 13.880: 91.0226% ( 1) 00:11:52.230 13.880 - 13.982: 91.0407% ( 1) 00:11:52.230 13.982 - 14.085: 91.1131% ( 4) 00:11:52.230 14.085 - 14.188: 91.1312% ( 1) 00:11:52.230 14.188 - 14.291: 91.2579% ( 7) 00:11:52.230 14.291 - 14.394: 91.4751% ( 12) 00:11:52.230 14.394 - 14.496: 91.6380% ( 9) 00:11:52.230 14.496 - 14.599: 91.6923% ( 3) 00:11:52.230 14.599 - 14.702: 91.7828% ( 5) 00:11:52.230 14.702 - 14.805: 91.8371% ( 3) 00:11:52.230 14.805 - 14.908: 91.8914% ( 3) 00:11:52.230 15.113 - 15.216: 92.0000% ( 6) 00:11:52.230 15.216 - 15.319: 92.0724% ( 4) 00:11:52.230 15.319 - 15.422: 92.1086% ( 2) 00:11:52.230 15.422 - 15.524: 92.1267% ( 1) 00:11:52.230 15.524 - 15.627: 92.1448% ( 1) 00:11:52.230 15.627 - 15.730: 92.1629% ( 1) 00:11:52.230 15.833 - 15.936: 92.1810% ( 1) 00:11:52.230 15.936 - 16.039: 92.1991% ( 1) 00:11:52.230 16.141 - 16.244: 92.2172% ( 1) 00:11:52.230 16.450 - 16.553: 92.2896% ( 4) 00:11:52.230 16.553 - 16.655: 92.3439% ( 3) 00:11:52.230 16.655 - 16.758: 92.4706% ( 7) 00:11:52.230 16.758 - 16.861: 92.7240% ( 14) 00:11:52.230 16.861 - 16.964: 93.0136% ( 16) 00:11:52.230 16.964 - 17.067: 93.3756% ( 20) 00:11:52.230 17.067 - 17.169: 93.7919% ( 23) 00:11:52.230 17.169 - 17.272: 94.2081% ( 23) 00:11:52.230 17.272 - 17.375: 94.5701% ( 20) 00:11:52.230 17.375 - 17.478: 95.0407% ( 26) 00:11:52.230 17.478 - 17.581: 95.3665% ( 18) 00:11:52.230 17.581 - 17.684: 95.8190% ( 25) 00:11:52.230 17.684 - 17.786: 96.0362% ( 12) 00:11:52.230 17.786 - 17.889: 96.1810% ( 8) 00:11:52.230 17.889 - 17.992: 96.5249% ( 19) 00:11:52.230 17.992 - 18.095: 96.8326% ( 17) 00:11:52.230 18.095 - 18.198: 97.0860% ( 14) 00:11:52.230 18.198 - 18.300: 97.3032% ( 12) 00:11:52.230 18.300 - 18.403: 97.3937% ( 5) 00:11:52.230 18.403 - 18.506: 97.5204% ( 7) 00:11:52.230 18.506 - 18.609: 97.6290% ( 6) 00:11:52.230 18.609 - 18.712: 97.7195% ( 5) 00:11:52.230 18.712 - 18.814: 97.7557% ( 2) 00:11:52.230 18.814 - 18.917: 97.8824% ( 7) 00:11:52.230 18.917 - 19.020: 97.9367% ( 3) 00:11:52.230 19.020 - 19.123: 97.9729% ( 2) 00:11:52.230 19.123 - 19.226: 98.0271% ( 3) 00:11:52.230 19.226 - 19.329: 98.0633% ( 2) 00:11:52.230 19.329 - 19.431: 98.1176% ( 3) 00:11:52.230 19.431 - 19.534: 98.2081% ( 5) 00:11:52.230 19.534 - 19.637: 98.2986% ( 5) 00:11:52.230 19.637 - 19.740: 98.4615% ( 9) 00:11:52.230 19.740 - 19.843: 98.5520% ( 5) 00:11:52.230 19.843 - 19.945: 98.7692% ( 12) 00:11:52.230 19.945 - 20.048: 98.8054% ( 2) 00:11:52.230 20.048 - 20.151: 98.9140% ( 6) 00:11:52.230 20.151 - 20.254: 99.0226% ( 6) 00:11:52.230 20.254 - 20.357: 99.1674% ( 8) 00:11:52.230 20.357 - 20.459: 99.2579% ( 5) 00:11:52.230 20.459 - 20.562: 99.2760% ( 1) 00:11:52.230 20.562 - 20.665: 99.3122% ( 2) 00:11:52.230 21.179 - 21.282: 99.3303% ( 1) 00:11:52.230 21.796 - 21.899: 99.3484% ( 1) 00:11:52.230 22.310 - 22.413: 99.3665% ( 1) 00:11:52.230 22.516 - 22.618: 99.3846% ( 1) 00:11:52.230 22.618 - 22.721: 99.4027% ( 
1) 00:11:52.230 22.721 - 22.824: 99.4208% ( 1) 00:11:52.230 22.824 - 22.927: 99.4389% ( 1) 00:11:52.230 22.927 - 23.030: 99.4570% ( 1) 00:11:52.230 23.441 - 23.544: 99.4751% ( 1) 00:11:52.230 23.749 - 23.852: 99.5113% ( 2) 00:11:52.230 23.955 - 24.058: 99.5294% ( 1) 00:11:52.230 24.263 - 24.366: 99.5475% ( 1) 00:11:52.230 24.366 - 24.469: 99.5656% ( 1) 00:11:52.230 24.469 - 24.572: 99.6018% ( 2) 00:11:52.230 25.292 - 25.394: 99.6199% ( 1) 00:11:52.230 25.394 - 25.497: 99.6380% ( 1) 00:11:52.230 25.497 - 25.600: 99.6561% ( 1) 00:11:52.230 25.600 - 25.703: 99.6742% ( 1) 00:11:52.230 25.703 - 25.806: 99.6923% ( 1) 00:11:52.230 25.806 - 25.908: 99.7104% ( 1) 00:11:52.230 25.908 - 26.011: 99.7285% ( 1) 00:11:52.230 26.320 - 26.525: 99.7466% ( 1) 00:11:52.230 26.525 - 26.731: 99.7647% ( 1) 00:11:52.230 26.937 - 27.142: 99.7828% ( 1) 00:11:52.230 27.142 - 27.348: 99.8009% ( 1) 00:11:52.230 30.432 - 30.638: 99.8190% ( 1) 00:11:52.230 33.722 - 33.928: 99.8371% ( 1) 00:11:52.230 34.133 - 34.339: 99.8733% ( 2) 00:11:52.230 35.984 - 36.190: 99.8914% ( 1) 00:11:52.230 40.508 - 40.713: 99.9095% ( 1) 00:11:52.230 43.386 - 43.592: 99.9276% ( 1) 00:11:52.230 44.414 - 44.620: 99.9457% ( 1) 00:11:52.230 44.826 - 45.031: 99.9819% ( 2) 00:11:52.230 46.882 - 47.088: 100.0000% ( 1) 00:11:52.230 00:11:52.230 00:11:52.230 real 0m1.315s 00:11:52.230 user 0m1.112s 00:11:52.230 sys 0m0.155s 00:11:52.230 23:19:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.230 ************************************ 00:11:52.230 END TEST nvme_overhead 00:11:52.230 23:19:43 -- common/autotest_common.sh@10 -- # set +x 00:11:52.230 ************************************ 00:11:52.230 23:19:43 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:52.230 23:19:43 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:52.230 23:19:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:52.230 23:19:43 -- common/autotest_common.sh@10 -- # set +x 00:11:52.489 ************************************ 00:11:52.489 START TEST nvme_arbitration 00:11:52.489 ************************************ 00:11:52.489 23:19:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:55.779 Initializing NVMe Controllers 00:11:55.779 Attached to 0000:00:06.0 00:11:55.779 Attached to 0000:00:07.0 00:11:55.779 Attached to 0000:00:09.0 00:11:55.779 Attached to 0000:00:08.0 00:11:55.779 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:55.779 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:55.779 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:55.779 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:55.779 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:55.779 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:55.779 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:55.779 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:55.779 Initialization complete. Launching workers. 
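In the arbitration summary that follows, each worker is assigned 100000 I/Os (the -n 100000 in the configuration line above), and 'secs/100000 ios' is simply that count divided by the measured per-core rate: at 533.33 IO/s, 100000 / 533.33 comes to 187.50 seconds. A quick check of that arithmetic:

    # seconds to complete 100000 I/Os at 533.33 IO/s
    echo 'scale=2; 100000 / 533.33' | bc    # prints 187.50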
00:11:55.779 Starting thread on core 1 with urgent priority queue 00:11:55.779 Starting thread on core 2 with urgent priority queue 00:11:55.779 Starting thread on core 3 with urgent priority queue 00:11:55.779 Starting thread on core 0 with urgent priority queue 00:11:55.780 QEMU NVMe Ctrl (12340 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:11:55.780 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios 00:11:55.780 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:11:55.780 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:11:55.780 QEMU NVMe Ctrl (12343 ) core 2: 576.00 IO/s 173.61 secs/100000 ios 00:11:55.780 QEMU NVMe Ctrl (12342 ) core 3: 533.33 IO/s 187.50 secs/100000 ios 00:11:55.780 ======================================================== 00:11:55.780 00:11:55.780 00:11:55.780 real 0m3.495s 00:11:55.780 user 0m9.602s 00:11:55.780 sys 0m0.155s 00:11:55.780 23:19:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.780 23:19:47 -- common/autotest_common.sh@10 -- # set +x 00:11:55.780 ************************************ 00:11:55.780 END TEST nvme_arbitration 00:11:55.780 ************************************ 00:11:56.039 23:19:47 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:56.039 23:19:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:56.039 23:19:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:56.039 23:19:47 -- common/autotest_common.sh@10 -- # set +x 00:11:56.039 ************************************ 00:11:56.039 START TEST nvme_single_aen 00:11:56.039 ************************************ 00:11:56.039 23:19:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:56.039 [2024-07-26 23:19:47.614293] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:56.039 [2024-07-26 23:19:47.614356] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.039 [2024-07-26 23:19:47.763593] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:56.039 [2024-07-26 23:19:47.765332] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:11:56.039 [2024-07-26 23:19:47.766755] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:11:56.039 [2024-07-26 23:19:47.768159] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:11:56.299 Asynchronous Event Request test 00:11:56.299 Attached to 0000:00:06.0 00:11:56.299 Attached to 0000:00:07.0 00:11:56.299 Attached to 0000:00:09.0 00:11:56.299 Attached to 0000:00:08.0 00:11:56.299 Reset controller to setup AER completions for this process 00:11:56.299 Registering asynchronous event callbacks... 
00:11:56.299 Getting orig temperature thresholds of all controllers 00:11:56.299 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:56.299 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:56.299 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:56.299 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:56.299 Setting all controllers temperature threshold low to trigger AER 00:11:56.299 Waiting for all controllers temperature threshold to be set lower 00:11:56.299 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:56.299 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:11:56.299 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:56.299 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:11:56.299 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:56.299 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:11:56.299 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:56.299 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:11:56.299 Waiting for all controllers to trigger AER and reset threshold 00:11:56.299 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:56.299 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:56.299 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:56.299 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:56.299 Cleaning up... 00:11:56.299 00:11:56.299 real 0m0.251s 00:11:56.299 user 0m0.088s 00:11:56.299 sys 0m0.124s 00:11:56.299 23:19:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.299 23:19:47 -- common/autotest_common.sh@10 -- # set +x 00:11:56.299 ************************************ 00:11:56.299 END TEST nvme_single_aen 00:11:56.299 ************************************ 00:11:56.299 23:19:47 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:56.299 23:19:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:56.299 23:19:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:56.299 23:19:47 -- common/autotest_common.sh@10 -- # set +x 00:11:56.299 ************************************ 00:11:56.299 START TEST nvme_doorbell_aers 00:11:56.299 ************************************ 00:11:56.299 23:19:47 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:11:56.299 23:19:47 -- nvme/nvme.sh@70 -- # bdfs=() 00:11:56.299 23:19:47 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:56.299 23:19:47 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:56.299 23:19:47 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:56.299 23:19:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:56.299 23:19:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:56.299 23:19:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:56.299 23:19:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:56.299 23:19:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:56.299 23:19:47 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:56.299 23:19:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:56.299 23:19:47 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:56.299 23:19:47 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:56.558 [2024-07-26 23:19:48.278579] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:06.550 Executing: test_write_invalid_db 00:12:06.550 Waiting for AER completion... 00:12:06.550 Failure: test_write_invalid_db 00:12:06.550 00:12:06.550 Executing: test_invalid_db_write_overflow_sq 00:12:06.550 Waiting for AER completion... 00:12:06.550 Failure: test_invalid_db_write_overflow_sq 00:12:06.550 00:12:06.550 Executing: test_invalid_db_write_overflow_cq 00:12:06.550 Waiting for AER completion... 00:12:06.550 Failure: test_invalid_db_write_overflow_cq 00:12:06.550 00:12:06.550 23:19:58 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:06.550 23:19:58 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:12:06.809 [2024-07-26 23:19:58.337721] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:16.795 Executing: test_write_invalid_db 00:12:16.795 Waiting for AER completion... 00:12:16.795 Failure: test_write_invalid_db 00:12:16.795 00:12:16.795 Executing: test_invalid_db_write_overflow_sq 00:12:16.795 Waiting for AER completion... 00:12:16.795 Failure: test_invalid_db_write_overflow_sq 00:12:16.795 00:12:16.795 Executing: test_invalid_db_write_overflow_cq 00:12:16.795 Waiting for AER completion... 00:12:16.795 Failure: test_invalid_db_write_overflow_cq 00:12:16.795 00:12:16.795 23:20:08 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:16.795 23:20:08 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:12:16.795 [2024-07-26 23:20:08.389447] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:26.789 Executing: test_write_invalid_db 00:12:26.789 Waiting for AER completion... 00:12:26.789 Failure: test_write_invalid_db 00:12:26.789 00:12:26.789 Executing: test_invalid_db_write_overflow_sq 00:12:26.789 Waiting for AER completion... 00:12:26.789 Failure: test_invalid_db_write_overflow_sq 00:12:26.789 00:12:26.789 Executing: test_invalid_db_write_overflow_cq 00:12:26.789 Waiting for AER completion... 00:12:26.789 Failure: test_invalid_db_write_overflow_cq 00:12:26.789 00:12:26.789 23:20:18 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:26.789 23:20:18 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:12:26.790 [2024-07-26 23:20:18.435725] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:36.770 Executing: test_write_invalid_db 00:12:36.770 Waiting for AER completion... 00:12:36.770 Failure: test_write_invalid_db 00:12:36.770 00:12:36.770 Executing: test_invalid_db_write_overflow_sq 00:12:36.770 Waiting for AER completion... 00:12:36.770 Failure: test_invalid_db_write_overflow_sq 00:12:36.770 00:12:36.770 Executing: test_invalid_db_write_overflow_cq 00:12:36.770 Waiting for AER completion... 
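The xtrace lines above show the loop that produces this output: nvme_doorbell_aers pulls the controller addresses out of the gen_nvme.sh JSON with jq, then runs the doorbell_aers binary against each controller under a 10 second timeout. Condensed from those trace fragments (nvme.sh@70-73 and autotest_common.sh@1498-1504), with error handling omitted:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
            -r "trtype:PCIe traddr:$bdf"
    done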
00:12:36.770 Failure: test_invalid_db_write_overflow_cq 00:12:36.770 00:12:36.770 00:12:36.770 real 0m40.314s 00:12:36.770 user 0m29.701s 00:12:36.770 sys 0m10.254s 00:12:36.770 23:20:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.770 23:20:28 -- common/autotest_common.sh@10 -- # set +x 00:12:36.770 ************************************ 00:12:36.770 END TEST nvme_doorbell_aers 00:12:36.770 ************************************ 00:12:36.770 23:20:28 -- nvme/nvme.sh@97 -- # uname 00:12:36.770 23:20:28 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:36.770 23:20:28 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:12:36.770 23:20:28 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:12:36.770 23:20:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:36.770 23:20:28 -- common/autotest_common.sh@10 -- # set +x 00:12:36.770 ************************************ 00:12:36.770 START TEST nvme_multi_aen 00:12:36.770 ************************************ 00:12:36.770 23:20:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:12:36.770 [2024-07-26 23:20:28.335156] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:36.770 [2024-07-26 23:20:28.335227] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.770 [2024-07-26 23:20:28.503315] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:12:36.770 [2024-07-26 23:20:28.503386] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:36.770 [2024-07-26 23:20:28.503416] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:36.770 [2024-07-26 23:20:28.503432] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:36.770 [2024-07-26 23:20:28.505154] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:12:36.770 [2024-07-26 23:20:28.505185] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:36.770 [2024-07-26 23:20:28.505214] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:36.770 [2024-07-26 23:20:28.505230] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:36.770 [2024-07-26 23:20:28.506539] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:12:36.770 [2024-07-26 23:20:28.506567] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:36.770 [2024-07-26 23:20:28.506591] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 
00:12:36.770 [2024-07-26 23:20:28.506607] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:36.770 [2024-07-26 23:20:28.507990] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:12:36.770 [2024-07-26 23:20:28.508016] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:36.771 [2024-07-26 23:20:28.508038] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:36.771 [2024-07-26 23:20:28.508053] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65127) is not found. Dropping the request. 00:12:36.771 [2024-07-26 23:20:28.521015] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:36.771 [2024-07-26 23:20:28.521230] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.771 Child process pid: 65637 00:12:37.029 [Child] Asynchronous Event Request test 00:12:37.029 [Child] Attached to 0000:00:06.0 00:12:37.029 [Child] Attached to 0000:00:07.0 00:12:37.029 [Child] Attached to 0000:00:09.0 00:12:37.029 [Child] Attached to 0000:00:08.0 00:12:37.029 [Child] Registering asynchronous event callbacks... 00:12:37.029 [Child] Getting orig temperature thresholds of all controllers 00:12:37.029 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.029 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.029 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.029 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.029 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:37.029 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.029 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.029 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.029 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.029 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.029 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.029 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.029 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.029 [Child] Cleaning up... 00:12:37.288 Asynchronous Event Request test 00:12:37.288 Attached to 0000:00:06.0 00:12:37.288 Attached to 0000:00:07.0 00:12:37.288 Attached to 0000:00:09.0 00:12:37.288 Attached to 0000:00:08.0 00:12:37.288 Reset controller to setup AER completions for this process 00:12:37.288 Registering asynchronous event callbacks... 
00:12:37.288 Getting orig temperature thresholds of all controllers 00:12:37.288 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.288 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.288 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.288 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:37.288 Setting all controllers temperature threshold low to trigger AER 00:12:37.288 Waiting for all controllers temperature threshold to be set lower 00:12:37.288 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.288 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:12:37.288 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.288 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:12:37.288 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.288 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:12:37.288 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:37.289 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:12:37.289 Waiting for all controllers to trigger AER and reset threshold 00:12:37.289 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.289 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.289 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.289 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:37.289 Cleaning up... 00:12:37.289 00:12:37.289 real 0m0.582s 00:12:37.289 user 0m0.188s 00:12:37.289 sys 0m0.278s 00:12:37.289 23:20:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.289 23:20:28 -- common/autotest_common.sh@10 -- # set +x 00:12:37.289 ************************************ 00:12:37.289 END TEST nvme_multi_aen 00:12:37.289 ************************************ 00:12:37.289 23:20:28 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:37.289 23:20:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:37.289 23:20:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:37.289 23:20:28 -- common/autotest_common.sh@10 -- # set +x 00:12:37.289 ************************************ 00:12:37.289 START TEST nvme_startup 00:12:37.289 ************************************ 00:12:37.289 23:20:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:37.548 Initializing NVMe Controllers 00:12:37.548 Attached to 0000:00:06.0 00:12:37.548 Attached to 0000:00:07.0 00:12:37.548 Attached to 0000:00:09.0 00:12:37.548 Attached to 0000:00:08.0 00:12:37.548 Initialization complete. 00:12:37.548 Time used:155609.609 (us). 
00:12:37.548 00:12:37.548 real 0m0.249s 00:12:37.548 user 0m0.076s 00:12:37.548 sys 0m0.133s 00:12:37.548 23:20:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.548 23:20:29 -- common/autotest_common.sh@10 -- # set +x 00:12:37.548 ************************************ 00:12:37.548 END TEST nvme_startup 00:12:37.548 ************************************ 00:12:37.548 23:20:29 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:37.548 23:20:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:37.548 23:20:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:37.548 23:20:29 -- common/autotest_common.sh@10 -- # set +x 00:12:37.548 ************************************ 00:12:37.548 START TEST nvme_multi_secondary 00:12:37.548 ************************************ 00:12:37.548 23:20:29 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:12:37.548 23:20:29 -- nvme/nvme.sh@52 -- # pid0=65693 00:12:37.548 23:20:29 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:37.548 23:20:29 -- nvme/nvme.sh@54 -- # pid1=65694 00:12:37.548 23:20:29 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:37.548 23:20:29 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:41.732 Initializing NVMe Controllers 00:12:41.732 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:41.732 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:41.732 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:41.732 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:41.732 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:12:41.732 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:12:41.732 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:12:41.732 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:12:41.732 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:12:41.732 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:12:41.732 Initialization complete. Launching workers. 
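The nvme_multi_secondary trace above starts three spdk_nvme_perf instances that share one DPDK shared-memory instance (the common -i 0): a 5 second run and a 3 second run are backgrounded (pids 65693 and 65694), a third run executes in the foreground, and the test then waits on both background pids. A skeleton of that flow, condensed from the nvme.sh@51-57 trace lines; the exact ordering inside nvme.sh may differ:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &    # 5 s run on core 0
    pid0=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &    # 3 s run on core 2
    pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2      # 3 s run, foreground
    wait $pid0
    wait $pid1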
00:12:41.732 ======================================================== 00:12:41.732 Latency(us) 00:12:41.732 Device Information : IOPS MiB/s Average min max 00:12:41.732 PCIE (0000:00:06.0) NSID 1 from core 1: 5360.04 20.94 2982.97 1027.23 7596.88 00:12:41.732 PCIE (0000:00:07.0) NSID 1 from core 1: 5360.04 20.94 2984.69 1057.45 8863.25 00:12:41.732 PCIE (0000:00:09.0) NSID 1 from core 1: 5360.04 20.94 2984.77 1026.94 9581.72 00:12:41.732 PCIE (0000:00:08.0) NSID 1 from core 1: 5360.04 20.94 2984.89 1039.42 9448.44 00:12:41.732 PCIE (0000:00:08.0) NSID 2 from core 1: 5360.04 20.94 2985.02 1034.68 9244.60 00:12:41.732 PCIE (0000:00:08.0) NSID 3 from core 1: 5365.37 20.96 2982.13 1041.55 8262.86 00:12:41.732 ======================================================== 00:12:41.732 Total : 32165.58 125.65 2984.08 1026.94 9581.72 00:12:41.732 00:12:41.732 Initializing NVMe Controllers 00:12:41.732 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:41.732 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:41.732 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:41.732 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:41.732 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:12:41.732 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:12:41.732 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:12:41.732 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:12:41.732 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:12:41.732 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:12:41.732 Initialization complete. Launching workers. 00:12:41.732 ======================================================== 00:12:41.732 Latency(us) 00:12:41.732 Device Information : IOPS MiB/s Average min max 00:12:41.732 PCIE (0000:00:06.0) NSID 1 from core 2: 3287.31 12.84 4866.14 1155.14 14291.48 00:12:41.732 PCIE (0000:00:07.0) NSID 1 from core 2: 3287.31 12.84 4867.05 1034.68 14036.09 00:12:41.732 PCIE (0000:00:09.0) NSID 1 from core 2: 3287.31 12.84 4867.00 866.16 13648.69 00:12:41.732 PCIE (0000:00:08.0) NSID 1 from core 2: 3287.31 12.84 4866.87 1063.64 17967.11 00:12:41.732 PCIE (0000:00:08.0) NSID 2 from core 2: 3287.31 12.84 4866.97 1154.71 18046.40 00:12:41.732 PCIE (0000:00:08.0) NSID 3 from core 2: 3287.31 12.84 4866.84 1100.78 18369.42 00:12:41.732 ======================================================== 00:12:41.732 Total : 19723.88 77.05 4866.81 866.16 18369.42 00:12:41.732 00:12:41.732 23:20:33 -- nvme/nvme.sh@56 -- # wait 65693 00:12:43.139 Initializing NVMe Controllers 00:12:43.139 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:43.139 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:43.139 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:43.139 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:43.139 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:12:43.139 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:12:43.139 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:12:43.139 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:12:43.139 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:12:43.139 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:12:43.139 Initialization complete. Launching workers. 
00:12:43.139 ======================================================== 00:12:43.139 Latency(us) 00:12:43.139 Device Information : IOPS MiB/s Average min max 00:12:43.139 PCIE (0000:00:06.0) NSID 1 from core 0: 8645.84 33.77 1849.20 919.84 8360.25 00:12:43.139 PCIE (0000:00:07.0) NSID 1 from core 0: 8645.84 33.77 1850.15 937.63 9158.47 00:12:43.139 PCIE (0000:00:09.0) NSID 1 from core 0: 8645.84 33.77 1850.13 874.80 8675.66 00:12:43.139 PCIE (0000:00:08.0) NSID 1 from core 0: 8645.84 33.77 1850.09 814.98 8664.97 00:12:43.139 PCIE (0000:00:08.0) NSID 2 from core 0: 8645.84 33.77 1850.07 775.08 8610.87 00:12:43.139 PCIE (0000:00:08.0) NSID 3 from core 0: 8645.84 33.77 1850.04 736.55 8356.20 00:12:43.139 ======================================================== 00:12:43.139 Total : 51875.03 202.64 1849.95 736.55 9158.47 00:12:43.139 00:12:43.139 23:20:34 -- nvme/nvme.sh@57 -- # wait 65694 00:12:43.139 23:20:34 -- nvme/nvme.sh@61 -- # pid0=65769 00:12:43.139 23:20:34 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:43.139 23:20:34 -- nvme/nvme.sh@63 -- # pid1=65770 00:12:43.139 23:20:34 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:43.139 23:20:34 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:47.331 Initializing NVMe Controllers 00:12:47.331 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:47.331 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:47.331 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:47.331 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:47.331 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:12:47.331 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:12:47.331 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:12:47.331 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:12:47.331 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:12:47.331 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:12:47.331 Initialization complete. Launching workers. 
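In these perf tables the MiB/s column follows directly from IOPS, since every I/O is the 4096-byte read requested on the command lines: MiB/s = IOPS x 4096 / 2^20. For the 8645.84 IOPS runs above, 8645.84 x 4096 / 1048576 gives about 33.77, matching the printed column. A one-line check:

    awk 'BEGIN { print 8645.84 * 4096 / (1024 * 1024) }'    # ~33.77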
00:12:47.331 ======================================================== 00:12:47.331 Latency(us) 00:12:47.331 Device Information : IOPS MiB/s Average min max 00:12:47.331 PCIE (0000:00:06.0) NSID 1 from core 0: 4917.16 19.21 3251.51 1078.02 6154.14 00:12:47.331 PCIE (0000:00:07.0) NSID 1 from core 0: 4917.16 19.21 3253.60 1105.25 5933.64 00:12:47.331 PCIE (0000:00:09.0) NSID 1 from core 0: 4917.16 19.21 3254.22 1120.73 6922.40 00:12:47.331 PCIE (0000:00:08.0) NSID 1 from core 0: 4917.16 19.21 3254.80 1121.83 6657.96 00:12:47.331 PCIE (0000:00:08.0) NSID 2 from core 0: 4917.16 19.21 3255.01 1078.73 6500.55 00:12:47.331 PCIE (0000:00:08.0) NSID 3 from core 0: 4917.16 19.21 3255.35 1103.22 6680.13 00:12:47.331 ======================================================== 00:12:47.331 Total : 29502.98 115.25 3254.08 1078.02 6922.40 00:12:47.331 00:12:47.331 Initializing NVMe Controllers 00:12:47.331 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:47.331 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:47.331 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:47.331 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:47.331 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:12:47.331 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:12:47.331 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:12:47.331 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:12:47.331 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:12:47.331 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:12:47.331 Initialization complete. Launching workers. 00:12:47.331 ======================================================== 00:12:47.331 Latency(us) 00:12:47.331 Device Information : IOPS MiB/s Average min max 00:12:47.331 PCIE (0000:00:06.0) NSID 1 from core 1: 4985.55 19.47 3206.82 1054.78 6546.47 00:12:47.331 PCIE (0000:00:07.0) NSID 1 from core 1: 4985.55 19.47 3208.50 1098.04 6321.31 00:12:47.331 PCIE (0000:00:09.0) NSID 1 from core 1: 4985.55 19.47 3208.48 1032.29 6114.83 00:12:47.331 PCIE (0000:00:08.0) NSID 1 from core 1: 4985.55 19.47 3208.59 1093.87 6221.05 00:12:47.331 PCIE (0000:00:08.0) NSID 2 from core 1: 4985.55 19.47 3208.56 1084.64 6418.42 00:12:47.331 PCIE (0000:00:08.0) NSID 3 from core 1: 4985.55 19.47 3208.52 1094.68 6122.67 00:12:47.331 ======================================================== 00:12:47.331 Total : 29913.33 116.85 3208.25 1032.29 6546.47 00:12:47.331 00:12:48.711 Initializing NVMe Controllers 00:12:48.711 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:48.711 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:48.711 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:48.711 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:48.711 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:12:48.711 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:12:48.711 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:12:48.711 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:12:48.711 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:12:48.711 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:12:48.711 Initialization complete. Launching workers. 
00:12:48.711 ======================================================== 00:12:48.711 Latency(us) 00:12:48.711 Device Information : IOPS MiB/s Average min max 00:12:48.711 PCIE (0000:00:06.0) NSID 1 from core 2: 3110.13 12.15 5143.20 1144.69 16857.36 00:12:48.711 PCIE (0000:00:07.0) NSID 1 from core 2: 3110.13 12.15 5144.05 1138.44 12166.31 00:12:48.711 PCIE (0000:00:09.0) NSID 1 from core 2: 3110.13 12.15 5143.95 1145.34 11957.40 00:12:48.711 PCIE (0000:00:08.0) NSID 1 from core 2: 3110.13 12.15 5144.12 1149.38 12134.37 00:12:48.711 PCIE (0000:00:08.0) NSID 2 from core 2: 3110.13 12.15 5143.27 1185.95 11282.32 00:12:48.711 PCIE (0000:00:08.0) NSID 3 from core 2: 3113.32 12.16 5138.43 1051.76 11431.25 00:12:48.711 ======================================================== 00:12:48.711 Total : 18663.95 72.91 5142.84 1051.76 16857.36 00:12:48.711 00:12:48.711 23:20:40 -- nvme/nvme.sh@65 -- # wait 65769 00:12:48.711 23:20:40 -- nvme/nvme.sh@66 -- # wait 65770 00:12:48.711 00:12:48.711 real 0m11.068s 00:12:48.711 user 0m19.026s 00:12:48.711 sys 0m1.036s 00:12:48.711 23:20:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:48.711 23:20:40 -- common/autotest_common.sh@10 -- # set +x 00:12:48.711 ************************************ 00:12:48.711 END TEST nvme_multi_secondary 00:12:48.711 ************************************ 00:12:48.711 23:20:40 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:48.711 23:20:40 -- nvme/nvme.sh@102 -- # kill_stub 00:12:48.711 23:20:40 -- common/autotest_common.sh@1065 -- # [[ -e /proc/64695 ]] 00:12:48.711 23:20:40 -- common/autotest_common.sh@1066 -- # kill 64695 00:12:48.711 23:20:40 -- common/autotest_common.sh@1067 -- # wait 64695 00:12:48.971 [2024-07-26 23:20:40.635135] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request. 00:12:48.971 [2024-07-26 23:20:40.635228] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request. 00:12:48.971 [2024-07-26 23:20:40.635255] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request. 00:12:48.971 [2024-07-26 23:20:40.635303] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request. 00:12:49.909 [2024-07-26 23:20:41.637117] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request. 00:12:49.909 [2024-07-26 23:20:41.637224] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request. 00:12:49.909 [2024-07-26 23:20:41.637254] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request. 00:12:49.909 [2024-07-26 23:20:41.637286] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request. 00:12:50.478 [2024-07-26 23:20:42.167194] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request. 
00:12:50.478 [2024-07-26 23:20:42.167284] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request.
00:12:50.478 [2024-07-26 23:20:42.167318] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request.
00:12:50.478 [2024-07-26 23:20:42.167353] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request.
00:12:53.015 [2024-07-26 23:20:44.162773] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request.
00:12:53.015 [2024-07-26 23:20:44.162871] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request.
00:12:53.015 [2024-07-26 23:20:44.162906] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request.
00:12:53.015 [2024-07-26 23:20:44.162946] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65636) is not found. Dropping the request.
00:12:53.015 23:20:44 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0
00:12:53.015 23:20:44 -- common/autotest_common.sh@1073 -- # echo 2
00:12:53.015 23:20:44 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:12:53.015 23:20:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:12:53.015 23:20:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:12:53.015 23:20:44 -- common/autotest_common.sh@10 -- # set +x
00:12:53.015 ************************************
00:12:53.015 START TEST bdev_nvme_reset_stuck_adm_cmd
00:12:53.015 ************************************
00:12:53.015 23:20:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:12:53.015 * Looking for test storage...
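run_test, which frames every sub-test in this log, prints the START/END banners seen just above, times its payload (the real/user/sys triplets scattered through the log), and keeps xtrace quiet while it sets up. An illustrative reduction, not the verbatim helper from common/autotest_common.sh:

# Sketch of the run_test framing; the real helper also manages xtrace state.
run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                 # produces the real/user/sys lines in this log
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}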
00:12:53.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:53.016 23:20:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:12:53.016 23:20:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:12:53.016 23:20:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:12:53.016 23:20:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:12:53.016 23:20:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:12:53.016 23:20:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:12:53.016 23:20:44 -- common/autotest_common.sh@1509 -- # bdfs=()
00:12:53.016 23:20:44 -- common/autotest_common.sh@1509 -- # local bdfs
00:12:53.016 23:20:44 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:12:53.016 23:20:44 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:12:53.016 23:20:44 -- common/autotest_common.sh@1498 -- # bdfs=()
00:12:53.016 23:20:44 -- common/autotest_common.sh@1498 -- # local bdfs
00:12:53.016 23:20:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:12:53.016 23:20:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:12:53.016 23:20:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:12:53.016 23:20:44 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:12:53.016 23:20:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0
00:12:53.016 23:20:44 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0
00:12:53.016 23:20:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0
00:12:53.016 23:20:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']'
00:12:53.016 23:20:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65957
00:12:53.016 23:20:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:12:53.016 23:20:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:12:53.016 23:20:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65957
00:12:53.016 23:20:44 -- common/autotest_common.sh@819 -- # '[' -z 65957 ']'
00:12:53.016 23:20:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:53.016 23:20:44 -- common/autotest_common.sh@824 -- # local max_retries=100
00:12:53.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:53.016 23:20:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:53.016 23:20:44 -- common/autotest_common.sh@828 -- # xtrace_disable
00:12:53.016 23:20:44 -- common/autotest_common.sh@10 -- # set +x
00:12:53.275 [2024-07-26 23:20:44.790094] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
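get_first_nvme_bdf, expanded step by step above, discovers controllers by having scripts/gen_nvme.sh emit a JSON bdev config and pulling each PCI address (traddr) out with jq. A condensed sketch of the same flow; the function name and the rootdir default are illustrative, the commands are the ones in the trace:

get_first_nvme_bdf_sketch() {
    local rootdir=${rootdir:-/home/vagrant/spdk_repo/spdk}   # repo path on this VM
    local bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && return 1   # same emptiness guard as the (( 4 == 0 )) step
    echo "${bdfs[0]}"                    # here: 0000:00:06.0
}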
00:12:53.275 [2024-07-26 23:20:44.790600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65957 ] 00:12:53.275 [2024-07-26 23:20:44.977977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.534 [2024-07-26 23:20:45.191551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:53.534 [2024-07-26 23:20:45.192025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.534 [2024-07-26 23:20:45.192100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.534 [2024-07-26 23:20:45.192194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.534 [2024-07-26 23:20:45.192154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.913 23:20:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:54.914 23:20:46 -- common/autotest_common.sh@852 -- # return 0 00:12:54.914 23:20:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:12:54.914 23:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.914 23:20:46 -- common/autotest_common.sh@10 -- # set +x 00:12:54.914 nvme0n1 00:12:54.914 23:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.914 23:20:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:54.914 23:20:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_RvW3p.txt 00:12:54.914 23:20:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:54.914 23:20:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.914 23:20:46 -- common/autotest_common.sh@10 -- # set +x 00:12:54.914 true 00:12:54.914 23:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.914 23:20:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:54.914 23:20:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1722036046 00:12:54.914 23:20:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65988 00:12:54.914 23:20:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:54.914 23:20:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:54.914 23:20:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:56.818 23:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.818 23:20:48 -- common/autotest_common.sh@10 -- # set +x 00:12:56.818 [2024-07-26 23:20:48.344289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:12:56.818 [2024-07-26 23:20:48.345196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:56.818 [2024-07-26 23:20:48.345319] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:56.818 [2024-07-26 23:20:48.345343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.818 [2024-07-26 23:20:48.347761] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:56.818 23:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.818 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65988 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65988 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65988 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:56.818 23:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.818 23:20:48 -- common/autotest_common.sh@10 -- # set +x 00:12:56.818 23:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_RvW3p.txt 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:56.818 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:56.819 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:56.819 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:56.819 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:56.819 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:56.819 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:56.819 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_RvW3p.txt 00:12:56.819 23:20:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65957 00:12:56.819 23:20:48 -- common/autotest_common.sh@926 -- # '[' -z 65957 ']' 00:12:56.819 23:20:48 -- common/autotest_common.sh@930 -- # kill -0 65957 00:12:56.819 23:20:48 -- common/autotest_common.sh@931 -- # uname 00:12:56.819 23:20:48 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:56.819 23:20:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65957 00:12:56.819 23:20:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:56.819 23:20:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:56.819 killing process with pid 65957 00:12:56.819 23:20:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65957' 00:12:56.819 23:20:48 -- common/autotest_common.sh@945 -- # kill 65957 00:12:56.819 23:20:48 -- common/autotest_common.sh@950 -- # wait 65957 00:12:59.353 23:20:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:59.353 23:20:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:59.353 00:12:59.353 real 0m6.387s 00:12:59.353 user 0m21.974s 00:12:59.353 sys 0m0.794s 00:12:59.353 23:20:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.353 23:20:50 -- common/autotest_common.sh@10 -- # set +x 00:12:59.353 ************************************ 00:12:59.353 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:59.353 ************************************ 00:12:59.353 23:20:50 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:59.353 23:20:50 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:59.353 23:20:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:59.353 23:20:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:59.353 23:20:50 -- common/autotest_common.sh@10 -- # set +x 00:12:59.353 ************************************ 00:12:59.353 START TEST nvme_fio 00:12:59.353 ************************************ 00:12:59.353 23:20:50 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:12:59.353 23:20:50 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:59.353 23:20:50 -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:59.353 23:20:50 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:59.353 23:20:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:59.353 23:20:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:12:59.353 23:20:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:59.353 23:20:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:59.353 23:20:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:59.353 23:20:51 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:59.353 23:20:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:12:59.353 23:20:51 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:12:59.353 23:20:51 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:59.353 23:20:51 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:59.353 23:20:51 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:12:59.353 23:20:51 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:59.612 23:20:51 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:12:59.612 23:20:51 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:59.870 23:20:51 -- nvme/nvme.sh@41 -- # bs=4096 00:12:59.870 23:20:51 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:12:59.870 23:20:51 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:12:59.870 23:20:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:59.870 23:20:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:59.870 23:20:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:59.870 23:20:51 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:59.870 23:20:51 -- common/autotest_common.sh@1320 -- # shift 00:12:59.870 23:20:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:59.870 23:20:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:59.870 23:20:51 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:59.870 23:20:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:59.870 23:20:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:59.870 23:20:51 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:59.870 23:20:51 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:59.870 23:20:51 -- common/autotest_common.sh@1326 -- # break 00:12:59.870 23:20:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:59.870 23:20:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:13:00.129 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:00.129 fio-3.35 00:13:00.129 Starting 1 thread 00:13:04.320 00:13:04.320 test: (groupid=0, jobs=1): err= 0: pid=66139: Fri Jul 26 23:20:55 2024 00:13:04.320 read: IOPS=23.3k, BW=91.0MiB/s (95.4MB/s)(182MiB/2001msec) 00:13:04.320 slat (nsec): min=3682, max=57561, avg=4095.02, stdev=814.59 00:13:04.320 clat (usec): min=199, max=11156, avg=2740.18, stdev=304.28 00:13:04.320 lat (usec): min=203, max=11213, avg=2744.27, stdev=304.65 00:13:04.320 clat percentiles (usec): 00:13:04.320 | 1.00th=[ 2311], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2638], 00:13:04.320 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:13:04.320 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2868], 95.00th=[ 2933], 00:13:04.320 | 99.00th=[ 3818], 99.50th=[ 4359], 99.90th=[ 6063], 99.95th=[ 8717], 00:13:04.320 | 99.99th=[11076] 00:13:04.320 bw ( KiB/s): min=90304, max=93928, per=99.18%, avg=92376.00, stdev=1867.12, samples=3 00:13:04.320 iops : min=22576, max=23482, avg=23094.00, stdev=466.78, samples=3 00:13:04.320 write: IOPS=23.1k, BW=90.3MiB/s (94.7MB/s)(181MiB/2001msec); 0 zone resets 00:13:04.320 slat (nsec): min=3804, max=62295, avg=4561.27, stdev=885.65 00:13:04.320 clat (usec): min=175, max=11073, avg=2749.87, stdev=313.35 00:13:04.320 lat (usec): min=180, max=11086, avg=2754.44, stdev=313.71 00:13:04.320 clat percentiles (usec): 00:13:04.320 | 1.00th=[ 2311], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2638], 00:13:04.320 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:13:04.320 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2868], 95.00th=[ 2966], 00:13:04.320 | 99.00th=[ 3884], 99.50th=[ 
4424], 99.90th=[ 6718], 99.95th=[ 8979], 00:13:04.320 | 99.99th=[10814] 00:13:04.320 bw ( KiB/s): min=89712, max=94496, per=99.94%, avg=92458.67, stdev=2469.62, samples=3 00:13:04.320 iops : min=22428, max=23624, avg=23114.67, stdev=617.41, samples=3 00:13:04.320 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:13:04.320 lat (msec) : 2=0.29%, 4=98.83%, 10=0.81%, 20=0.03% 00:13:04.320 cpu : usr=99.50%, sys=0.00%, ctx=2, majf=0, minf=607 00:13:04.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:04.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:04.320 issued rwts: total=46593,46281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:04.320 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:04.320 00:13:04.320 Run status group 0 (all jobs): 00:13:04.320 READ: bw=91.0MiB/s (95.4MB/s), 91.0MiB/s-91.0MiB/s (95.4MB/s-95.4MB/s), io=182MiB (191MB), run=2001-2001msec 00:13:04.320 WRITE: bw=90.3MiB/s (94.7MB/s), 90.3MiB/s-90.3MiB/s (94.7MB/s-94.7MB/s), io=181MiB (190MB), run=2001-2001msec 00:13:04.320 ----------------------------------------------------- 00:13:04.320 Suppressions used: 00:13:04.320 count bytes template 00:13:04.320 1 32 /usr/src/fio/parse.c 00:13:04.320 1 8 libtcmalloc_minimal.so 00:13:04.320 ----------------------------------------------------- 00:13:04.320 00:13:04.320 23:20:55 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:04.320 23:20:55 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:04.320 23:20:55 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:13:04.321 23:20:55 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:04.321 23:20:55 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:13:04.321 23:20:55 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:04.580 23:20:56 -- nvme/nvme.sh@41 -- # bs=4096 00:13:04.580 23:20:56 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:13:04.580 23:20:56 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:13:04.580 23:20:56 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:04.580 23:20:56 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:04.580 23:20:56 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:04.580 23:20:56 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:04.580 23:20:56 -- common/autotest_common.sh@1320 -- # shift 00:13:04.580 23:20:56 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:04.580 23:20:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:04.580 23:20:56 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:04.580 23:20:56 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:04.580 23:20:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:04.580 23:20:56 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:04.580 23:20:56 -- common/autotest_common.sh@1325 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:13:04.580 23:20:56 -- common/autotest_common.sh@1326 -- # break 00:13:04.580 23:20:56 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:04.580 23:20:56 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:13:04.839 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:04.839 fio-3.35 00:13:04.839 Starting 1 thread 00:13:09.036 00:13:09.036 test: (groupid=0, jobs=1): err= 0: pid=66212: Fri Jul 26 23:21:00 2024 00:13:09.036 read: IOPS=23.4k, BW=91.3MiB/s (95.7MB/s)(183MiB/2001msec) 00:13:09.036 slat (nsec): min=3700, max=61631, avg=4078.70, stdev=860.31 00:13:09.036 clat (usec): min=208, max=10845, avg=2731.02, stdev=275.18 00:13:09.036 lat (usec): min=212, max=10906, avg=2735.10, stdev=275.60 00:13:09.036 clat percentiles (usec): 00:13:09.036 | 1.00th=[ 2376], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2606], 00:13:09.036 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:13:09.036 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2835], 95.00th=[ 2900], 00:13:09.036 | 99.00th=[ 3425], 99.50th=[ 4178], 99.90th=[ 5866], 99.95th=[ 8225], 00:13:09.036 | 99.99th=[10683] 00:13:09.036 bw ( KiB/s): min=91056, max=94640, per=99.38%, avg=92869.33, stdev=1792.38, samples=3 00:13:09.036 iops : min=22764, max=23660, avg=23217.33, stdev=448.10, samples=3 00:13:09.036 write: IOPS=23.2k, BW=90.7MiB/s (95.1MB/s)(181MiB/2001msec); 0 zone resets 00:13:09.036 slat (nsec): min=3806, max=56293, avg=4537.20, stdev=890.01 00:13:09.036 clat (usec): min=178, max=10762, avg=2738.72, stdev=284.83 00:13:09.036 lat (usec): min=182, max=10775, avg=2743.26, stdev=285.21 00:13:09.036 clat percentiles (usec): 00:13:09.036 | 1.00th=[ 2409], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2638], 00:13:09.036 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:13:09.036 | 70.00th=[ 2802], 80.00th=[ 2802], 90.00th=[ 2835], 95.00th=[ 2900], 00:13:09.036 | 99.00th=[ 3490], 99.50th=[ 4293], 99.90th=[ 6259], 99.95th=[ 8586], 00:13:09.036 | 99.99th=[10290] 00:13:09.036 bw ( KiB/s): min=90512, max=94464, per=100.00%, avg=92984.00, stdev=2154.67, samples=3 00:13:09.036 iops : min=22628, max=23616, avg=23246.00, stdev=538.67, samples=3 00:13:09.036 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:13:09.036 lat (msec) : 2=0.05%, 4=99.25%, 10=0.64%, 20=0.02% 00:13:09.036 cpu : usr=99.55%, sys=0.00%, ctx=7, majf=0, minf=607 00:13:09.036 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:09.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:09.036 issued rwts: total=46747,46458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.036 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:09.036 00:13:09.036 Run status group 0 (all jobs): 00:13:09.036 READ: bw=91.3MiB/s (95.7MB/s), 91.3MiB/s-91.3MiB/s (95.7MB/s-95.7MB/s), io=183MiB (191MB), run=2001-2001msec 00:13:09.036 WRITE: bw=90.7MiB/s (95.1MB/s), 90.7MiB/s-90.7MiB/s (95.1MB/s-95.1MB/s), io=181MiB (190MB), run=2001-2001msec 00:13:09.036 ----------------------------------------------------- 00:13:09.036 Suppressions used: 00:13:09.036 count bytes template 00:13:09.036 1 32 /usr/src/fio/parse.c 00:13:09.036 1 8 
libtcmalloc_minimal.so 00:13:09.036 ----------------------------------------------------- 00:13:09.036 00:13:09.036 23:21:00 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:09.036 23:21:00 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:09.036 23:21:00 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:13:09.036 23:21:00 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:09.036 23:21:00 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:13:09.036 23:21:00 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:09.295 23:21:00 -- nvme/nvme.sh@41 -- # bs=4096 00:13:09.295 23:21:00 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:13:09.295 23:21:00 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:13:09.295 23:21:00 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:09.295 23:21:00 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:09.295 23:21:00 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:09.295 23:21:00 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:09.295 23:21:00 -- common/autotest_common.sh@1320 -- # shift 00:13:09.296 23:21:00 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:09.296 23:21:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:09.296 23:21:00 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:09.296 23:21:00 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:09.296 23:21:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:09.296 23:21:00 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:09.296 23:21:00 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:09.296 23:21:00 -- common/autotest_common.sh@1326 -- # break 00:13:09.296 23:21:00 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:09.296 23:21:00 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:13:09.296 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:09.296 fio-3.35 00:13:09.296 Starting 1 thread 00:13:13.488 00:13:13.488 test: (groupid=0, jobs=1): err= 0: pid=66279: Fri Jul 26 23:21:04 2024 00:13:13.488 read: IOPS=23.5k, BW=92.0MiB/s (96.5MB/s)(184MiB/2001msec) 00:13:13.488 slat (nsec): min=3690, max=60108, avg=4088.97, stdev=856.38 00:13:13.488 clat (usec): min=182, max=11634, avg=2707.62, stdev=357.69 00:13:13.488 lat (usec): min=186, max=11694, avg=2711.71, stdev=358.09 00:13:13.488 clat percentiles (usec): 00:13:13.488 | 1.00th=[ 2212], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:13:13.488 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:13:13.488 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2835], 95.00th=[ 2933], 00:13:13.488 | 99.00th=[ 4015], 99.50th=[ 4817], 99.90th=[ 6718], 99.95th=[ 8717], 
00:13:13.488 | 99.99th=[11338] 00:13:13.488 bw ( KiB/s): min=91752, max=95136, per=99.68%, avg=93890.67, stdev=1860.48, samples=3 00:13:13.488 iops : min=22938, max=23784, avg=23472.67, stdev=465.12, samples=3 00:13:13.488 write: IOPS=23.4k, BW=91.4MiB/s (95.8MB/s)(183MiB/2001msec); 0 zone resets 00:13:13.488 slat (nsec): min=3807, max=34935, avg=4558.96, stdev=877.22 00:13:13.488 clat (usec): min=251, max=11468, avg=2720.64, stdev=367.58 00:13:13.488 lat (usec): min=256, max=11481, avg=2725.20, stdev=367.96 00:13:13.488 clat percentiles (usec): 00:13:13.488 | 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:13:13.488 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:13:13.488 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 2966], 00:13:13.488 | 99.00th=[ 4113], 99.50th=[ 4948], 99.90th=[ 7046], 99.95th=[ 9110], 00:13:13.488 | 99.99th=[10945] 00:13:13.488 bw ( KiB/s): min=91448, max=96376, per=100.00%, avg=93973.33, stdev=2466.29, samples=3 00:13:13.488 iops : min=22862, max=24094, avg=23493.33, stdev=616.57, samples=3 00:13:13.488 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.03% 00:13:13.488 lat (msec) : 2=0.55%, 4=98.30%, 10=1.05%, 20=0.03% 00:13:13.488 cpu : usr=99.50%, sys=0.05%, ctx=5, majf=0, minf=607 00:13:13.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:13.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:13.488 issued rwts: total=47121,46799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:13.488 00:13:13.488 Run status group 0 (all jobs): 00:13:13.488 READ: bw=92.0MiB/s (96.5MB/s), 92.0MiB/s-92.0MiB/s (96.5MB/s-96.5MB/s), io=184MiB (193MB), run=2001-2001msec 00:13:13.488 WRITE: bw=91.4MiB/s (95.8MB/s), 91.4MiB/s-91.4MiB/s (95.8MB/s-95.8MB/s), io=183MiB (192MB), run=2001-2001msec 00:13:13.488 ----------------------------------------------------- 00:13:13.488 Suppressions used: 00:13:13.488 count bytes template 00:13:13.488 1 32 /usr/src/fio/parse.c 00:13:13.488 1 8 libtcmalloc_minimal.so 00:13:13.488 ----------------------------------------------------- 00:13:13.488 00:13:13.488 23:21:05 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:13.488 23:21:05 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:13.488 23:21:05 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:13:13.488 23:21:05 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:13.747 23:21:05 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:13:13.747 23:21:05 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:14.006 23:21:05 -- nvme/nvme.sh@41 -- # bs=4096 00:13:14.006 23:21:05 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:13:14.006 23:21:05 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:13:14.006 23:21:05 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:14.006 23:21:05 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:14.006 23:21:05 -- 
common/autotest_common.sh@1318 -- # local sanitizers 00:13:14.006 23:21:05 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:14.006 23:21:05 -- common/autotest_common.sh@1320 -- # shift 00:13:14.006 23:21:05 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:14.006 23:21:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:14.006 23:21:05 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:14.006 23:21:05 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:14.006 23:21:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:14.006 23:21:05 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:14.006 23:21:05 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:14.006 23:21:05 -- common/autotest_common.sh@1326 -- # break 00:13:14.006 23:21:05 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:14.007 23:21:05 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:13:14.007 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:14.007 fio-3.35 00:13:14.007 Starting 1 thread 00:13:19.342 00:13:19.342 test: (groupid=0, jobs=1): err= 0: pid=66345: Fri Jul 26 23:21:10 2024 00:13:19.342 read: IOPS=21.2k, BW=82.8MiB/s (86.8MB/s)(166MiB/2001msec) 00:13:19.342 slat (nsec): min=4178, max=71339, avg=5001.01, stdev=1178.19 00:13:19.342 clat (usec): min=233, max=8755, avg=3015.33, stdev=460.95 00:13:19.342 lat (usec): min=238, max=8768, avg=3020.33, stdev=461.35 00:13:19.342 clat percentiles (usec): 00:13:19.342 | 1.00th=[ 1876], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:13:19.342 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3032], 00:13:19.342 | 70.00th=[ 3097], 80.00th=[ 3163], 90.00th=[ 3359], 95.00th=[ 3490], 00:13:19.342 | 99.00th=[ 4752], 99.50th=[ 5473], 99.90th=[ 8094], 99.95th=[ 8455], 00:13:19.342 | 99.99th=[ 8586] 00:13:19.342 bw ( KiB/s): min=77168, max=84208, per=96.45%, avg=81752.00, stdev=3973.25, samples=3 00:13:19.342 iops : min=19292, max=21052, avg=20438.00, stdev=993.31, samples=3 00:13:19.342 write: IOPS=21.0k, BW=82.2MiB/s (86.2MB/s)(164MiB/2001msec); 0 zone resets 00:13:19.342 slat (nsec): min=4304, max=65395, avg=5153.38, stdev=1098.48 00:13:19.342 clat (usec): min=271, max=8691, avg=3025.79, stdev=473.76 00:13:19.342 lat (usec): min=276, max=8704, avg=3030.94, stdev=474.16 00:13:19.342 clat percentiles (usec): 00:13:19.342 | 1.00th=[ 1909], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2802], 00:13:19.342 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064], 00:13:19.342 | 70.00th=[ 3097], 80.00th=[ 3163], 90.00th=[ 3359], 95.00th=[ 3490], 00:13:19.342 | 99.00th=[ 4752], 99.50th=[ 5800], 99.90th=[ 8291], 99.95th=[ 8455], 00:13:19.342 | 99.99th=[ 8586] 00:13:19.342 bw ( KiB/s): min=77048, max=84392, per=97.27%, avg=81877.33, stdev=4183.52, samples=3 00:13:19.342 iops : min=19262, max=21100, avg=20470.00, stdev=1046.48, samples=3 00:13:19.342 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:13:19.342 lat (msec) : 2=1.23%, 4=96.49%, 10=2.22% 00:13:19.342 cpu : usr=99.15%, sys=0.20%, ctx=5, majf=0, minf=605 00:13:19.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 
32=0.1%, >=64=99.9% 00:13:19.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:19.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:19.342 issued rwts: total=42402,42110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:19.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:19.342 00:13:19.342 Run status group 0 (all jobs): 00:13:19.342 READ: bw=82.8MiB/s (86.8MB/s), 82.8MiB/s-82.8MiB/s (86.8MB/s-86.8MB/s), io=166MiB (174MB), run=2001-2001msec 00:13:19.342 WRITE: bw=82.2MiB/s (86.2MB/s), 82.2MiB/s-82.2MiB/s (86.2MB/s-86.2MB/s), io=164MiB (172MB), run=2001-2001msec 00:13:19.602 ----------------------------------------------------- 00:13:19.602 Suppressions used: 00:13:19.602 count bytes template 00:13:19.602 1 32 /usr/src/fio/parse.c 00:13:19.602 1 8 libtcmalloc_minimal.so 00:13:19.602 ----------------------------------------------------- 00:13:19.602 00:13:19.602 23:21:11 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:19.602 ************************************ 00:13:19.602 END TEST nvme_fio 00:13:19.602 ************************************ 00:13:19.602 23:21:11 -- nvme/nvme.sh@46 -- # true 00:13:19.602 00:13:19.602 real 0m20.275s 00:13:19.602 user 0m14.845s 00:13:19.602 sys 0m7.227s 00:13:19.602 23:21:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.602 23:21:11 -- common/autotest_common.sh@10 -- # set +x 00:13:19.602 ************************************ 00:13:19.602 END TEST nvme 00:13:19.602 ************************************ 00:13:19.602 00:13:19.602 real 1m39.331s 00:13:19.602 user 3m45.017s 00:13:19.602 sys 0m25.092s 00:13:19.602 23:21:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.602 23:21:11 -- common/autotest_common.sh@10 -- # set +x 00:13:19.602 23:21:11 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:13:19.602 23:21:11 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:19.602 23:21:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:19.602 23:21:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:19.602 23:21:11 -- common/autotest_common.sh@10 -- # set +x 00:13:19.602 ************************************ 00:13:19.602 START TEST nvme_scc 00:13:19.602 ************************************ 00:13:19.602 23:21:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:19.861 * Looking for test storage... 
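All four fio passes in the nvme_fio test above route through the same wrapper: it runs ldd on the SPDK ioengine, grabs the ASan runtime the plugin was linked against, and preloads both so the uninstrumented fio binary can load the sanitized plugin. A condensed sketch; the function name is illustrative, while the probe and paths are the ones traced above:

fio_nvme_sketch() {
    local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    local asan_lib
    # same probe as the trace: which libasan does the plugin link against?
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # e.g. /usr/lib64/libasan.so.8
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
}

Invoked as in the runs above, e.g. fio_nvme_sketch example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096; the PCI address is written with dots because fio reserves ':' as a filename separator.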
00:13:19.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:19.861 23:21:11 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:19.861 23:21:11 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:19.861 23:21:11 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:19.861 23:21:11 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:19.861 23:21:11 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:19.861 23:21:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.861 23:21:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.861 23:21:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.861 23:21:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.861 23:21:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.861 23:21:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.861 23:21:11 -- paths/export.sh@5 -- # export PATH 00:13:19.861 23:21:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.861 23:21:11 -- nvme/functions.sh@10 -- # ctrls=() 00:13:19.861 23:21:11 -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:19.861 23:21:11 -- nvme/functions.sh@11 -- # nvmes=() 00:13:19.861 23:21:11 -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:19.861 23:21:11 -- nvme/functions.sh@12 -- # bdfs=() 00:13:19.861 23:21:11 -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:19.861 23:21:11 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:19.861 23:21:11 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:19.861 23:21:11 -- nvme/functions.sh@14 -- # nvme_name= 00:13:19.861 23:21:11 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.861 23:21:11 -- nvme/nvme_scc.sh@12 -- # uname 00:13:19.861 23:21:11 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 
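scan_nvme_ctrls, which the trace below expands at great length, flattens nvme id-ctrl output into one bash associative array per controller (nvme0[vid], nvme0[mdts], and so on), splitting each line on ':'. A condensed sketch of that parse, assuming an nvme-cli binary is available (the harness pins /usr/local/src/nvme-cli/nvme):

declare -A nvme0
while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue    # skip blank or colon-free lines
    reg=${reg//[[:space:]]/}                # field names are padded in id-ctrl output
    val="${val#"${val%%[![:space:]]*}"}"    # trim leading whitespace from the value
    nvme0[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)
printf 'vid=%s mdts=%s\n' "${nvme0[vid]}" "${nvme0[mdts]}"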
00:13:19.861 23:21:11 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:19.861 23:21:11 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:20.799 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:20.799 Waiting for block devices as requested 00:13:20.799 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:13:20.799 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:13:21.058 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:13:21.058 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:13:26.353 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:13:26.353 23:21:17 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:26.353 23:21:17 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:26.353 23:21:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:26.353 23:21:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:13:26.353 23:21:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:13:26.353 23:21:17 -- scripts/common.sh@15 -- # local i 00:13:26.353 23:21:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:13:26.353 23:21:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:26.353 23:21:17 -- scripts/common.sh@24 -- # return 0 00:13:26.353 23:21:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:26.353 23:21:17 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:26.353 23:21:17 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@18 -- # shift 00:13:26.353 23:21:17 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.353 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.353 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:26.353 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 
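One field worth decoding from the dump in progress here: the controller reports mdts=7. MDTS scales as a power of two of the controller's minimum memory page size (CAP.MPSMIN); assuming the 4 KiB minimum page that QEMU's NVMe model advertises, the largest single transfer the driver may issue works out to:

echo $(( (1 << 7) * 4096 ))   # 2^mdts * 4096 = 524288 bytes = 512 KiB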
00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 
-- # [[ -n 3 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:26.354 23:21:17 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:26.354 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.354 
23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.354 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # 
nvme0[pels]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # 
eval 'nvme0[awun]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.355 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.355 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:26.355 23:21:17 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:26.356 23:21:17 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:26.356 23:21:17 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:26.356 23:21:17 -- nvme/functions.sh@62 -- # 
bdfs["$ctrl_dev"]=0000:00:09.0 00:13:26.356 23:21:17 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:26.356 23:21:17 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:26.356 23:21:17 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:13:26.356 23:21:17 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:13:26.356 23:21:17 -- scripts/common.sh@15 -- # local i 00:13:26.356 23:21:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:13:26.356 23:21:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:26.356 23:21:17 -- scripts/common.sh@24 -- # return 0 00:13:26.356 23:21:17 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:26.356 23:21:17 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:26.356 23:21:17 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@18 -- # shift 00:13:26.356 23:21:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 
525400 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 
-- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:26.356 23:21:17 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.356 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.356 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 
'nvme1[frmw]="0x3"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 
23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.357 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.357 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.357 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- 
# IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- 
# nvme1[icsvscc]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:26.358 23:21:17 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.358 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.358 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:26.359 23:21:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:26.359 23:21:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:26.359 23:21:17 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:26.359 23:21:17 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@18 -- # shift 00:13:26.359 23:21:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 
23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:26.359 23:21:17 -- 
nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.359 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.359 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.359 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.360 
23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:26.360 
23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:26.360 23:21:17 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:26.360 23:21:17 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:13:26.360 23:21:17 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:13:26.360 23:21:17 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@18 -- # shift 00:13:26.360 23:21:17 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.360 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:13:26.360 23:21:17 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.360 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 
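For the lbaf0 through lbaf7 entries captured above: ms is the per-block metadata size in bytes, lbads is the LBA data size as a power of two (lbads:9 means 512-byte blocks, lbads:12 means 4096-byte), and rp is the relative performance hint. flbas carries the active format index in its low nibble, so flbas=0x4 selects format 4, matching the lbaf4 entry tagged "(in use)". A small decode using the values from this dump (idx and lbads here are illustrative variable names):

  flbas=0x4                    # from the id-ns dump above
  idx=$(( flbas & 0xf ))       # low nibble picks the active format -> 4
  lbads=12                     # lbaf4 reports lbads:12
  echo "active LBA format: $idx, block size: $(( 1 << lbads )) bytes"   # 4096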
00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:17 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:13:26.361 23:21:17 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:13:26.361 23:21:17 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 
00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:13:26.361 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.361 
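Note that nguid and eui64 come back as all zeros for these namespaces: QEMU's emulated controller does not populate unique namespace identifiers unless they are configured explicitly, so tooling that keys off NGUID/EUI64 has to fall back to device names in this environment. A quick guard for that case (illustrative only, reusing the array built earlier):

  nguid=${nvme1n2[nguid]:-}
  if [[ -z ${nguid//0/} ]]; then   # empty once all zeros are stripped
      echo "nvme1n2: no NGUID assigned; identify it by /dev name instead"
  fi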
23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.361 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:26.361 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:13:26.362 23:21:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:26.362 23:21:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:13:26.362 23:21:18 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 
id-ns /dev/nvme1n3 00:13:26.362 23:21:18 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@18 -- # shift 00:13:26.362 23:21:18 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # 
nvme1n3[dps]=0 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.362 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.362 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.362 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:13:26.363 23:21:18 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:26.363 23:21:18 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:26.363 23:21:18 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:13:26.363 23:21:18 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:26.363 23:21:18 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:26.363 23:21:18 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:13:26.363 23:21:18 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:13:26.363 23:21:18 -- scripts/common.sh@15 -- # local i 00:13:26.363 23:21:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:13:26.363 23:21:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:26.363 23:21:18 -- scripts/common.sh@24 -- # return 0 00:13:26.363 23:21:18 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:26.363 23:21:18 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:26.363 23:21:18 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@18 -- # shift 00:13:26.363 23:21:18 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.363 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.363 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:26.363 23:21:18 -- 
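With nvme1n3 stored, the trace registers the whole controller (ctrls, nvmes, bdfs at PCI address 0000:00:08.0, ordered_ctrls) and the outer loop advances to /sys/class/nvme/nvme2, where pci_can_use consults the allow/deny filter in scripts/common.sh before the id-ctrl pass begins. The overall shape of that enumeration, heavily abbreviated (a sketch: pci_can_use is stubbed out rather than reproducing SPDK's env-driven filter, and the per-namespace maps nvmes/ordered_ctrls are omitted):

  declare -A ctrls bdfs
  pci_can_use() { return 0; }   # stand-in for scripts/common.sh's real filter

  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(basename "$(readlink -f "$ctrl/device")")   # BDF, e.g. 0000:00:08.0
      pci_can_use "$pci" || continue
      ctrl_dev=${ctrl##*/}                              # nvme1, nvme2, ...
      # id-ctrl pass here (see the nvme_get sketch above), then one
      # id-ns pass per namespace node under the controller:
      for ns in "$ctrl/${ctrl##*/}n"*; do
          [[ -e $ns ]] || continue
          : # nvme_get_sketch "${ns##*/}" "/dev/${ns##*/}"
      done
      ctrls["$ctrl_dev"]=$ctrl_dev
      bdfs["$ctrl_dev"]=$pci
  done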
nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:26.363 23:21:18 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 
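The ver field packs the supported NVMe version as major/minor/tertiary bytes, so the 0x10400 just read for nvme2 decodes to NVMe 1.4.0:

  ver=0x10400                   # nvme2[ver] from the trace
  printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
  # -> NVMe 1.4.0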
23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- 
nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.364 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:26.364 23:21:18 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.364 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.365 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.365 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.365 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.365 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.365 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.365 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.365 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.365 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.365 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:26.365 23:21:18 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:26.365 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.365 23:21:18 -- 
00:13:26.365 23:21:18 -- nvme/functions.sh@21-23 -- # id-ctrl parse for nvme2, continued (one reg/val pair per read):
00:13:26.365 23:21:18 -- #   fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0
00:13:26.628 23:21:18 -- #   anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:13:26.628 23:21:18 -- #   sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
00:13:26.629 23:21:18 -- #   awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:13:26.629 23:21:18 -- #   subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:13:26.629 23:21:18 -- #   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:13:26.629 23:21:18 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:13:26.629 23:21:18 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:13:26.629 23:21:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:13:26.629 23:21:18 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:13:26.629 23:21:18 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:26.629 23:21:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
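The trace above is the same few lines of functions.sh looping once per field: nvme_get runs nvme-cli, splits each output line on ':' into a register name and a value, and evals the pair into a global associative array (nvme2, nvme2n1, ...). A minimal sketch of that helper, assuming nvme-cli emits one "name : value" pair per line and that $NVME points at the binary (here /usr/local/src/nvme-cli/nvme):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                  # e.g. declare nvme2n1 as a global assoc. array
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip headers and blank lines
            reg=${reg//[[:space:]]/}         # "nsze   " -> "nsze"
            val=${val# }                     # drop the space following ':'
            eval "${ref}[${reg}]=\"\$val\""  # e.g. nvme2n1[nsze]=0x17a17a
        done < <("$NVME" "$@")               # e.g. $NVME id-ns /dev/nvme2n1
    }

Because read takes the remainder of the line into val, multi-colon values such as the ps0 power-state string survive intact, which is why the arrays can later be queried field by field (e.g. ${nvme2[oncs]}).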
00:13:26.629 23:21:18 -- nvme/functions.sh@21-23 -- # id-ns parse for nvme2n1:
00:13:26.629 23:21:18 -- #   nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0
00:13:26.630 23:21:18 -- #   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:13:26.630 23:21:18 -- #   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:13:26.630 23:21:18 -- #   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:26.630 23:21:18 -- #   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:26.630 23:21:18 -- #   lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:13:26.631 23:21:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:13:26.631 23:21:18 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:13:26.631 23:21:18 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:13:26.631 23:21:18 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0
00:13:26.631 23:21:18 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:13:26.631 23:21:18 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
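With nvme2 fully parsed and registered, the outer loop (functions.sh@47-63 in the trace) advances to the next sysfs entry. A sketch of that enumeration, under stated assumptions: the pci_can_use body and the PCI_BLOCKED variable name are guesses at what scripts/common.sh@15-24 checks, and deriving the BDF from the device symlink stands in for however functions.sh@49 actually sets pci:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    pci_can_use() {
        # Assumption: a blocked-list regex check; the real common.sh logic may differ.
        [[ ${PCI_BLOCKED:-} =~ $1 ]] && return 1
        return 0
    }

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumption: BDF via the device symlink
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                              # nvme2, nvme3, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        declare -gA "${ctrl_dev}_ns=()"
        declare -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do               # /sys/class/nvme/nvme2/nvme2n1 ...
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns_dev##*n}]=$ns_dev               # keyed by namespace index
        done
        unset -n _ctrl_ns
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

The per-controller bookkeeping (ctrls, nvmes, bdfs, ordered_ctrls) mirrors the registrations visible in the trace just above; it is what gives later test stages a stable, index-ordered view of the discovered controllers.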
00:13:26.631 23:21:18 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:13:26.631 23:21:18 -- nvme/functions.sh@49 -- # pci=0000:00:07.0
00:13:26.631 23:21:18 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0
00:13:26.631 23:21:18 -- scripts/common.sh@15-24 -- # 0000:00:07.0 is neither blocked nor filtered out; return 0
00:13:26.631 23:21:18 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:13:26.631 23:21:18 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:13:26.631 23:21:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:13:26.631 23:21:18 -- nvme/functions.sh@21-23 -- # id-ctrl parse for nvme3:
00:13:26.631 23:21:18 -- #   vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:13:26.631 23:21:18 -- #   rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:13:26.631 23:21:18 -- #   crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:13:26.632 23:21:18 -- #   wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0
00:13:26.632 23:21:18 -- #   hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0
00:13:26.632 23:21:18 -- #   pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0
00:13:26.633 23:21:18 -- #   icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341
00:13:26.633 23:21:18 -- #   ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:13:26.634 23:21:18 -- #   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:13:26.634 23:21:18 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:13:26.634 23:21:18 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]]
00:13:26.634 23:21:18 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1
00:13:26.634 23:21:18 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1
00:13:26.634 23:21:18 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1
00:13:26.634 23:21:18 -- nvme/functions.sh@21-23 -- # id-ns parse for nvme3n1:
00:13:26.634 23:21:18 -- #   nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:13:26.634 23:21:18 -- #   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:13:26.635 23:21:18 -- #   npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0
00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # eval
'nvme3n1[nvmsetid]="0"' 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.635 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.635 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.635 23:21:18 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.635 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.635 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.635 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.635 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.635 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.635 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 
lbads:12 rp:0 '
00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # IFS=:
00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val
00:13:26.635 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # IFS=:
00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val
00:13:26.635 23:21:18 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "'
00:13:26.635 23:21:18 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # IFS=:
00:13:26.635 23:21:18 -- nvme/functions.sh@21 -- # read -r reg val
00:13:26.635 23:21:18 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1
00:13:26.635 23:21:18 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:13:26.635 23:21:18 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:13:26.635 23:21:18 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0
00:13:26.635 23:21:18 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:13:26.635 23:21:18 -- nvme/functions.sh@65 -- # (( 4 > 0 ))
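The xtrace above is functions.sh caching controller and namespace state: nvme_get pipes `nvme id-ns` output through `IFS=: read -r reg val` and evals each pair into a global associative array named after the device (here nvme3n1), so later helpers can look fields up without touching the device again. A minimal sketch of that pattern, assuming bash 4.3+ for namerefs, a stock nvme-cli in $PATH, and root; the function name and whitespace handling are illustrative, not SPDK's actual code:

```bash
#!/usr/bin/env bash
# Minimal sketch of the caching pattern traced above. Not SPDK's code:
# functions.sh additionally builds one array per controller and per
# namespace (nvme3, nvme3n1, ...) plus the ctrls/nvmes/bdfs bookkeeping.
nvme_cache() {
        local -n _cache=$1   # nameref: the caller chooses the array's name
        local dev=$2 reg val
        while IFS=: read -r reg val; do
                reg=${reg//[[:space:]]/}   # "lbaf  0 " -> "lbaf0", as in the trace
                val=${val# }               # drop the space after the colon
                [[ -n $reg && -n $val ]] && _cache[$reg]=$val
        done < <(nvme id-ns "$dev")
}

declare -A nvme3n1=()
nvme_cache nvme3n1 /dev/nvme3n1
echo "nsze=${nvme3n1[nsze]} nlbaf=${nvme3n1[nlbaf]}"   # e.g. 0x140000 / 7
```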
00:13:26.635 23:21:18 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:13:26.635 23:21:18 -- nvme/functions.sh@202 -- # local _ctrls feature=scc
00:13:26.635 23:21:18 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:13:26.635 23:21:18 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc
00:13:26.635 23:21:18 -- nvme/functions.sh@190 -- # (( 4 == 0 ))
00:13:26.635 23:21:18 -- nvme/functions.sh@192 -- # local ctrl feature=scc
00:13:26.635 23:21:18 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc
00:13:26.635 23:21:18 -- nvme/functions.sh@194 -- # [[ function == function ]]
00:13:26.635 23:21:18 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:13:26.635 23:21:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1
00:13:26.635 23:21:18 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs
00:13:26.635 23:21:18 -- nvme/functions.sh@184 -- # get_oncs nvme1
00:13:26.635 23:21:18 -- nvme/functions.sh@169 -- # local ctrl=nvme1
00:13:26.635 23:21:18 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs
00:13:26.635 23:21:18 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:13:26.635 23:21:18 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:13:26.635 23:21:18 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:13:26.635 23:21:18 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:13:26.635 23:21:18 -- nvme/functions.sh@76 -- # echo 0x15d
00:13:26.635 23:21:18 -- nvme/functions.sh@184 -- # oncs=0x15d
00:13:26.635 23:21:18 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:13:26.635 23:21:18 -- nvme/functions.sh@197 -- # echo nvme1
00:13:26.635 23:21:18 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:13:26.635 23:21:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0
00:13:26.635 23:21:18 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs
00:13:26.635 23:21:18 -- nvme/functions.sh@184 -- # get_oncs nvme0
00:13:26.635 23:21:18 -- nvme/functions.sh@169 -- # local ctrl=nvme0
00:13:26.635 23:21:18 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs
00:13:26.635 23:21:18 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:13:26.635 23:21:18 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:13:26.635 23:21:18 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:13:26.635 23:21:18 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:13:26.635 23:21:18 -- nvme/functions.sh@76 -- # echo 0x15d
00:13:26.635 23:21:18 -- nvme/functions.sh@184 -- # oncs=0x15d
00:13:26.635 23:21:18 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:13:26.635 23:21:18 -- nvme/functions.sh@197 -- # echo nvme0
00:13:26.635 23:21:18 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:13:26.635 23:21:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3
00:13:26.635 23:21:18 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs
00:13:26.635 23:21:18 -- nvme/functions.sh@184 -- # get_oncs nvme3
00:13:26.635 23:21:18 -- nvme/functions.sh@169 -- # local ctrl=nvme3
00:13:26.635 23:21:18 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs
00:13:26.635 23:21:18 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:13:26.635 23:21:18 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:13:26.635 23:21:18 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:13:26.635 23:21:18 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:13:26.635 23:21:18 -- nvme/functions.sh@76 -- # echo 0x15d
00:13:26.635 23:21:18 -- nvme/functions.sh@184 -- # oncs=0x15d
00:13:26.635 23:21:18 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:13:26.635 23:21:18 -- nvme/functions.sh@197 -- # echo nvme3
00:13:26.635 23:21:18 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:13:26.635 23:21:18 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2
00:13:26.635 23:21:18 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs
00:13:26.635 23:21:18 -- nvme/functions.sh@184 -- # get_oncs nvme2
00:13:26.635 23:21:18 -- nvme/functions.sh@169 -- # local ctrl=nvme2
00:13:26.635 23:21:18 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs
00:13:26.635 23:21:18 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:13:26.635 23:21:18 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:13:26.635 23:21:18 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:13:26.635 23:21:18 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:13:26.635 23:21:18 -- nvme/functions.sh@76 -- # echo 0x15d
00:13:26.635 23:21:18 -- nvme/functions.sh@184 -- # oncs=0x15d
00:13:26.635 23:21:18 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:13:26.635 23:21:18 -- nvme/functions.sh@197 -- # echo nvme2
00:13:26.635 23:21:18 -- nvme/functions.sh@205 -- # (( 4 > 0 ))
00:13:26.635 23:21:18 -- nvme/functions.sh@206 -- # echo nvme1
00:13:26.635 23:21:18 -- nvme/functions.sh@207 -- # return 0
00:13:26.635 23:21:18 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:13:26.635 23:21:18 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0
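What the walk above is doing: get_ctrls_with_feature asks ctrl_has_scc about each cached controller, and ctrl_has_scc reduces to one arithmetic test, `(( oncs & 1 << 8 ))` — bit 8 of ONCS advertises the NVMe Copy (Simple Copy) command, and 0x15d has it set, so all four controllers qualify and the first hit, nvme1 at 0000:00:08.0, is echoed back. A hedged standalone version of the same bit test that queries nvme-cli directly instead of SPDK's cached arrays (the awk parsing is this sketch's simplification):

```bash
#!/usr/bin/env bash
# Sketch of the ONCS test traced above: 0x15d & (1 << 8) = 0x100 is
# non-zero, so the controller advertises the Copy command. functions.sh
# reads oncs from its cached id-ctrl array rather than re-running nvme-cli.
ctrl_has_scc() {
        local oncs
        oncs=$(nvme id-ctrl "$1" | awk -F: '/^oncs/ {gsub(/[[:space:]]/, "", $2); print $2}')
        (( oncs & 1 << 8 ))
}

for dev in /dev/nvme{0..3}; do
        ctrl_has_scc "$dev" && { echo "$dev supports Simple Copy"; break; }
done
```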
00:13:26.635 23:21:18 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:13:28.014 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:13:28.014 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:13:28.014 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:13:28.014 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:13:28.273 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:13:28.273 23:21:19 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0'
00:13:28.273 23:21:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:13:28.273 23:21:19 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:28.273 23:21:19 -- common/autotest_common.sh@10 -- # set +x
00:13:28.273 ************************************
00:13:28.273 START TEST nvme_simple_copy
00:13:28.274 ************************************
00:13:28.274 23:21:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0'
00:13:28.533 Initializing NVMe Controllers
00:13:28.533 Attaching to 0000:00:08.0
00:13:28.533 Controller supports SCC. Attached to 0000:00:08.0
00:13:28.533 Namespace ID: 1 size: 4GB
00:13:28.533 Initialization complete.
00:13:28.533
00:13:28.533 Controller QEMU NVMe Ctrl (12342 )
00:13:28.533 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:13:28.533 Namespace Block Size:4096
00:13:28.533 Writing LBAs 0 to 63 with Random Data
00:13:28.533 Copied LBAs from 0 - 63 to the Destination LBA 256
00:13:28.533 LBAs matching Written Data: 64
00:13:28.533
00:13:28.533 real 0m0.295s
00:13:28.533 user 0m0.108s
00:13:28.533 sys 0m0.086s
00:13:28.533 ************************************
00:13:28.533 END TEST nvme_simple_copy
00:13:28.533 ************************************
00:13:28.533 23:21:20 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:28.533 23:21:20 -- common/autotest_common.sh@10 -- # set +x
00:13:28.533 ************************************
00:13:28.533 END TEST nvme_scc
00:13:28.533 ************************************
00:13:28.533
00:13:28.533 real 0m8.901s
00:13:28.533 user 0m1.519s
00:13:28.533 sys 0m2.465s
00:13:28.533 23:21:20 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:28.533 23:21:20 -- common/autotest_common.sh@10 -- # set +x
00:13:28.792 23:21:20 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]]
00:13:28.792 23:21:20 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]]
00:13:28.792 23:21:20 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]]
00:13:28.792 23:21:20 -- spdk/autotest.sh@238 -- # [[ 1 -eq 1 ]]
00:13:28.792 23:21:20 -- spdk/autotest.sh@239 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:13:28.792 23:21:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:13:28.792 23:21:20 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:28.792 23:21:20 -- common/autotest_common.sh@10 -- # set +x
00:13:28.792 ************************************
00:13:28.792 START TEST nvme_fdp
00:13:28.792 ************************************
00:13:28.792 23:21:20 -- common/autotest_common.sh@1104 -- # test/nvme/nvme_fdp.sh
00:13:28.792 * Looking for test storage...
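The banner pairs and real/user/sys triplets above come from run_test in autotest_common.sh, which brackets each test command with START/END banners and times it; between the brackets, simple_copy wrote LBAs 0-63 with random data, issued a Simple Copy to destination LBA 256, and read back 64 matching LBAs. A rough sketch of a wrapper with that observable behavior, assuming nothing about SPDK's real implementation beyond what the log shows:

```bash
#!/usr/bin/env bash
# Hedged reconstruction of a run_test-style wrapper: banner, timed test
# command, banner. The real run_test also manages xtrace state and checks
# its argument count ('[' 4 -le 1 ']' in the trace); omitted here.
run_test() {
        local name=$1 rc
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
}

run_test nvme_simple_copy ./simple_copy -r 'trtype:pcie traddr:0000:00:08.0'
```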
00:13:28.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:28.792 23:21:20 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:28.792 23:21:20 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:28.792 23:21:20 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:28.792 23:21:20 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:28.792 23:21:20 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:28.792 23:21:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.792 23:21:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.792 23:21:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.792 23:21:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.792 23:21:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.792 23:21:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.792 23:21:20 -- paths/export.sh@5 -- # export PATH 00:13:28.792 23:21:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.792 23:21:20 -- nvme/functions.sh@10 -- # ctrls=() 00:13:28.792 23:21:20 -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:28.792 23:21:20 -- nvme/functions.sh@11 -- # nvmes=() 00:13:28.792 23:21:20 -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:28.792 23:21:20 -- nvme/functions.sh@12 -- # bdfs=() 00:13:28.792 23:21:20 -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:28.792 23:21:20 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:28.792 23:21:20 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:28.792 23:21:20 -- nvme/functions.sh@14 -- # nvme_name= 00:13:28.792 23:21:20 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:28.792 23:21:20 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:29.730 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:29.730 Waiting for block devices as requested 00:13:29.730 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.730 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.989 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.989 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:13:35.268 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:13:35.268 23:21:26 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:35.268 23:21:26 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:35.268 23:21:26 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:35.268 23:21:26 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:13:35.268 23:21:26 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:13:35.268 23:21:26 -- scripts/common.sh@15 -- # local i 00:13:35.268 23:21:26 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:13:35.268 23:21:26 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:35.268 23:21:26 -- scripts/common.sh@24 -- # return 0 00:13:35.268 23:21:26 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:35.268 23:21:26 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:35.268 23:21:26 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@18 -- # shift 00:13:35.268 23:21:26 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- 
# nvme0[fr]='8.0.0 ' 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.268 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:35.268 23:21:26 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.268 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 
23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:35.269 23:21:26 -- nvme/functions.sh@21 
-- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # 
nvme0[hmpre]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.269 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:35.269 23:21:26 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.269 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 
00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 
-- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.270 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.270 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.270 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:35.271 
23:21:26 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:35.271 23:21:26 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:35.271 23:21:26 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:35.271 23:21:26 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:13:35.271 23:21:26 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:35.271 23:21:26 -- nvme/functions.sh@47 -- # for ctrl in 
/sys/class/nvme/nvme* 00:13:35.271 23:21:26 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:13:35.271 23:21:26 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:13:35.271 23:21:26 -- scripts/common.sh@15 -- # local i 00:13:35.271 23:21:26 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:13:35.271 23:21:26 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:35.271 23:21:26 -- scripts/common.sh@24 -- # return 0 00:13:35.271 23:21:26 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:35.271 23:21:26 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:35.271 23:21:26 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@18 -- # shift 00:13:35.271 23:21:26 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:35.271 23:21:26 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:35.271 23:21:26 -- 
nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.271 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.271 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:35.271 23:21:26 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:35.272 23:21:26 
-- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:35.272 23:21:26 -- 
nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.272 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.272 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.272 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
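[editor's note] What this wall of trace is doing: nvme/functions.sh's nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns) on the device, splits each "field : value" line with IFS=: and read -r reg val, and evals the pair into a global associative array named after the device, so later tests can query fields such as nvme1[oncs] directly. A minimal sketch of the same pattern, simplified from what the trace shows (parse_id is a hypothetical stand-in for the real nvme_get):

    #!/usr/bin/env bash
    # Parse "field : value" lines from nvme-cli into an associative array.
    declare -gA nvme1=()
    parse_id() {                      # parse_id <array-name> <id-cmd> <dev>
        local ref=$1 reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}                  # strip padding around the key
            [[ -n $reg && -n $val ]] || continue      # skip blank/decorative lines
            eval "${ref}[${reg}]=\"${val# }\""        # e.g. nvme1[oncs]="0x15d"
        done < <(nvme "$2" "$3")
    }
    parse_id nvme1 id-ctrl /dev/nvme1
    echo "ONCS: ${nvme1[oncs]}"       # -> 0x15d on this QEMU controller

Because val is the last variable given to read, values that themselves contain colons (e.g. subnqn nqn.2019-08.org.qemu:12342) survive the IFS=: split intact, which is why the trace can store them verbatim.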
00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.273 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.273 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:35.273 23:21:26 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg 
val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # 
nvme1[icdoff]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:35.274 23:21:26 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:35.274 23:21:26 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:35.274 23:21:26 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:35.274 23:21:26 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@18 -- # shift 00:13:35.274 23:21:26 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 
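[editor's note] Here the loop moves from the controller to its namespaces: for every nvme1n* node under /sys/class/nvme/nvme1 it runs id-ns and records the device name, keyed by trailing namespace number, in the array behind the _ctrl_ns nameref (nvme1_ns). A simplified sketch of the globbing seen above, writing to nvme1_ns directly instead of through the nameref:

    # Enumerate the namespaces of one controller via sysfs, as the trace does.
    ctrl=/sys/class/nvme/nvme1
    declare -A nvme1_ns=()
    for ns in "$ctrl/${ctrl##*/}n"*; do        # matches nvme1n1, nvme1n2, ...
        [[ -e $ns ]] || continue               # the glob may match nothing
        ns_dev=${ns##*/}                       # e.g. nvme1n1
        nvme1_ns[${ns_dev##*n}]=$ns_dev        # key is the namespace number
        nvme id-ns "/dev/$ns_dev" >/dev/null   # fields parsed as sketched above
    done
    echo "namespaces: ${!nvme1_ns[*]} -> ${nvme1_ns[*]}"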
00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.274 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.274 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.274 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:35.275 23:21:26 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nvmsetid]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.275 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:35.275 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:35.275 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:35.276 23:21:26 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:35.276 23:21:26 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:13:35.276 23:21:26 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:13:35.276 23:21:26 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@18 -- # shift 00:13:35.276 23:21:26 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:13:35.276 
23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 
'nvme1n2[nacwu]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.276 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.276 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.276 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:35.277 23:21:26 
-- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:13:35.277 23:21:26 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:35.277 23:21:26 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:13:35.277 23:21:26 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:13:35.277 23:21:26 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@18 -- # shift 
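[editor's note] Each namespace reports eight LBA formats (lbaf0..lbaf7); lbads is log2 of the data block size and ms the per-block metadata bytes, so the "(in use)" entry ms:0 lbads:12 rp:0, selected here by flbas=0x4, means plain 4096-byte sectors with no metadata. Recovering the active block size from the fields just parsed takes only a few lines (a sketch, assuming the nvme1n2 array filled in by nvme_get above):

    # Derive the in-use block size from the parsed id-ns fields.
    flbas=${nvme1n2[flbas]}                    # 0x4 -> format index 4 (low nibble)
    idx=$(( flbas & 0xf ))
    lbaf=${nvme1n2[lbaf$idx]}                  # "ms:0 lbads:12 rp:0 (in use)"
    lbads=$(sed -E 's/.*lbads:([0-9]+).*/\1/' <<< "$lbaf")
    echo "block size: $(( 1 << lbads )) bytes" # 1 << 12 = 4096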
00:13:35.277 23:21:26 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 
23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.277 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.277 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:13:35.277 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # 
IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:13:35.278 23:21:26 
-- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.278 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:35.278 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.278 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:13:35.279 23:21:26 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:35.279 23:21:26 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:35.279 23:21:26 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:13:35.279 23:21:26 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:35.279 23:21:26 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:35.279 23:21:26 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:13:35.279 23:21:26 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:13:35.279 23:21:26 -- scripts/common.sh@15 -- # local i 00:13:35.279 23:21:26 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:13:35.279 23:21:26 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:35.279 23:21:26 -- scripts/common.sh@24 -- # return 0 00:13:35.279 23:21:26 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:35.279 23:21:26 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:35.279 23:21:26 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@18 -- # shift 00:13:35.279 23:21:26 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # 
IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 
'nvme2[rtd3r]="0"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 
00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.279 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:35.279 23:21:26 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:35.279 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:35.280 
23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:35.280 
23:21:26 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.280 23:21:26 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.280 23:21:26 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.280 23:21:26 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:35.280 23:21:27 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:35.280 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 
23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 
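The two power-state entries around this point show a quirk of the parser: IFS=: read -r reg val splits only at the first colon, so the ps0 line above parses cleanly while id-ctrl's continuation line becomes reg=rwt with everything else, embedded colons included, kept in val (the nvme2[rwt] assignment just below). A standalone demonstration:

    line='rwt:0 rwl:0 idle_power:- active_power:-'
    IFS=: read -r reg val <<< "$line"
    echo "reg=$reg"   # reg=rwt
    echo "val=$val"   # val=0 rwl:0 idle_power:- active_power:-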
00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:35.281 23:21:27 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:35.281 23:21:27 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:35.281 23:21:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:35.281 23:21:27 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:35.281 23:21:27 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:35.281 23:21:27 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@18 -- # shift 00:13:35.281 23:21:27 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.281 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.281 23:21:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 
-- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # 
IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.544 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:35.544 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.544 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:35.545 23:21:27 -- 
nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 
lbads:9 rp:0 ' 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:35.545 23:21:27 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:35.545 23:21:27 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:35.545 23:21:27 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:13:35.545 23:21:27 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:35.545 23:21:27 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:35.545 23:21:27 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:13:35.545 23:21:27 -- 
nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:13:35.545 23:21:27 -- scripts/common.sh@15 -- # local i 00:13:35.545 23:21:27 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:13:35.545 23:21:27 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:35.545 23:21:27 -- scripts/common.sh@24 -- # return 0 00:13:35.545 23:21:27 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:35.545 23:21:27 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:35.545 23:21:27 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@18 -- # shift 00:13:35.545 23:21:27 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.545 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.545 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:35.545 23:21:27 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- 
nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:35.546 
23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:35.546 23:21:27 -- 
nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.546 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.546 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.546 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- 
nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 
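The packed trace above is the nvme_get helper from test/nvme/functions.sh at work: it pipes `/usr/local/src/nvme-cli/nvme id-ctrl` output through a `while IFS=: read -r reg val` loop and evals each "reg : val" pair into a bash associative array (nvme3[vid], nvme3[sn], nvme3[sqes] just above, and so on). A simplified sketch of that parsing pattern, an editorial re-creation using a bash 4.3+ nameref instead of the script's eval and shift, assuming nvme-cli on PATH; not the literal helper:

nvme_get_sketch() {
    local dev=$1 reg val
    local -n out=$2                          # caller-supplied associative array
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        # e.g. "ps    0 : mp:25.00W ..." -> out[ps0]='mp:25.00W ...'
        out[${reg//[[:space:]]/}]=${val# }
    done < <(nvme id-ctrl "/dev/$dev")
}
declare -A ctrl3 && nvme_get_sketch nvme3 ctrl3 && echo "${ctrl3[sn]}"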
00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.547 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.547 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.547 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # 
nvme3[nwpc]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:35.548 23:21:27 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:35.548 23:21:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:13:35.548 23:21:27 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:13:35.548 23:21:27 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@18 -- # shift 00:13:35.548 23:21:27 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 
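The namespace pass beginning here repeats the same parse for `nvme id-ns /dev/nvme3n1`. The nsze just captured (0x140000 blocks), together with the in-use LBA format recorded a few entries below (flbas 0x4 selects lbaf4, whose lbads:12 means 2^12 = 4096-byte blocks), puts the namespace at 5 GiB. A quick check of that arithmetic (editorial, not part of the test run):

echo $(( 0x140000 * (1 << 12) ))   # 1310720 blocks * 4096 B = 5368709120 B = 5 GiB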
00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # 
nvme3n1[fpi]=0 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.548 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.548 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:13:35.548 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- 
nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 
-- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.549 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.549 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:35.549 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:35.550 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.550 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.550 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:35.550 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:35.550 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:35.550 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.550 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.550 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:16 
lbads:12 rp:0 ]] 00:13:35.550 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:35.550 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:35.550 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.550 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.550 23:21:27 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:35.550 23:21:27 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:35.550 23:21:27 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:35.550 23:21:27 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.550 23:21:27 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.550 23:21:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:13:35.550 23:21:27 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:35.550 23:21:27 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:35.550 23:21:27 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:13:35.550 23:21:27 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:35.550 23:21:27 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:35.550 23:21:27 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:35.550 23:21:27 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:13:35.550 23:21:27 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:35.550 23:21:27 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:13:35.550 23:21:27 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:35.550 23:21:27 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:13:35.550 23:21:27 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:13:35.550 23:21:27 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:35.550 23:21:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:35.550 23:21:27 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:13:35.550 23:21:27 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:13:35.550 23:21:27 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:13:35.550 23:21:27 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:13:35.550 23:21:27 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:35.550 23:21:27 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:35.550 23:21:27 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:35.550 23:21:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:35.550 23:21:27 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:35.550 23:21:27 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:35.550 23:21:27 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:35.550 23:21:27 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:35.550 23:21:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:35.550 23:21:27 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:13:35.550 23:21:27 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:13:35.550 23:21:27 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:13:35.550 23:21:27 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:13:35.550 23:21:27 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:35.550 23:21:27 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:35.550 23:21:27 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:35.550 23:21:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:35.550 23:21:27 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:35.550 23:21:27 -- nvme/functions.sh@76 -- # echo 0x88010 
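The loop being traced here, ctrl_has_fdp, is the gate for the FDP suite: it fetches each controller's cached CTRATT value and tests bit 19, the Flexible Data Placement capability bit. nvme1's 0x8000 fails the test; the 0x88010 echoed just above for nvme0 passes, which is why nvme0 ends up as the selected controller further down. The same check stated standalone, with the two CTRATT values seen in this run:

for ctratt in 0x8000 0x88010; do
    if (( ctratt & 1 << 19 )); then                  # 1 << 19 == 0x80000
        printf '%#x: FDP supported\n' "$ctratt"      # 0x88010 -> nvme0 here
    else
        printf '%#x: FDP not supported\n' "$ctratt"  # 0x8000 has no bit 19
    fi
done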
00:13:35.550 23:21:27 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:13:35.550 23:21:27 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:35.550 23:21:27 -- nvme/functions.sh@197 -- # echo nvme0 00:13:35.550 23:21:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:35.550 23:21:27 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:13:35.550 23:21:27 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:13:35.550 23:21:27 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:13:35.550 23:21:27 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:13:35.550 23:21:27 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:35.550 23:21:27 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:35.550 23:21:27 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:35.550 23:21:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:35.550 23:21:27 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:35.550 23:21:27 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:35.550 23:21:27 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:35.550 23:21:27 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:35.550 23:21:27 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:35.550 23:21:27 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:13:35.550 23:21:27 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:13:35.550 23:21:27 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:13:35.550 23:21:27 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:13:35.550 23:21:27 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:35.550 23:21:27 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:35.550 23:21:27 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:35.550 23:21:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:35.550 23:21:27 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:35.550 23:21:27 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:35.550 23:21:27 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:35.550 23:21:27 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:35.550 23:21:27 -- nvme/functions.sh@204 -- # trap - ERR 00:13:35.550 23:21:27 -- nvme/functions.sh@204 -- # print_backtrace 00:13:35.550 23:21:27 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:13:35.550 23:21:27 -- common/autotest_common.sh@1132 -- # return 0 00:13:35.550 23:21:27 -- nvme/functions.sh@204 -- # trap - ERR 00:13:35.550 23:21:27 -- nvme/functions.sh@204 -- # print_backtrace 00:13:35.550 23:21:27 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:13:35.550 23:21:27 -- common/autotest_common.sh@1132 -- # return 0 00:13:35.550 23:21:27 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:13:35.550 23:21:27 -- nvme/functions.sh@206 -- # echo nvme0 00:13:35.550 23:21:27 -- nvme/functions.sh@207 -- # return 0 00:13:35.550 23:21:27 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:13:35.550 23:21:27 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:13:35.550 23:21:27 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:36.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:36.927 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:36.927 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:36.927 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:37.186 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:37.186 23:21:28 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement 
/home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:13:37.186 23:21:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:37.186 23:21:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:37.186 23:21:28 -- common/autotest_common.sh@10 -- # set +x 00:13:37.186 ************************************ 00:13:37.186 START TEST nvme_flexible_data_placement 00:13:37.186 ************************************ 00:13:37.186 23:21:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:13:37.446 Initializing NVMe Controllers 00:13:37.446 Attaching to 0000:00:09.0 00:13:37.446 Controller supports FDP Attached to 0000:00:09.0 00:13:37.446 Namespace ID: 1 Endurance Group ID: 1 00:13:37.446 Initialization complete. 00:13:37.446 00:13:37.446 ================================== 00:13:37.446 == FDP tests for Namespace: #01 == 00:13:37.446 ================================== 00:13:37.446 00:13:37.446 Get Feature: FDP: 00:13:37.446 ================= 00:13:37.446 Enabled: Yes 00:13:37.446 FDP configuration Index: 0 00:13:37.446 00:13:37.446 FDP configurations log page 00:13:37.446 =========================== 00:13:37.446 Number of FDP configurations: 1 00:13:37.446 Version: 0 00:13:37.446 Size: 112 00:13:37.446 FDP Configuration Descriptor: 0 00:13:37.446 Descriptor Size: 96 00:13:37.446 Reclaim Group Identifier format: 2 00:13:37.446 FDP Volatile Write Cache: Not Present 00:13:37.446 FDP Configuration: Valid 00:13:37.446 Vendor Specific Size: 0 00:13:37.446 Number of Reclaim Groups: 2 00:13:37.446 Number of Recalim Unit Handles: 8 00:13:37.446 Max Placement Identifiers: 128 00:13:37.446 Number of Namespaces Suppprted: 256 00:13:37.446 Reclaim unit Nominal Size: 6000000 bytes 00:13:37.446 Estimated Reclaim Unit Time Limit: Not Reported 00:13:37.446 RUH Desc #000: RUH Type: Initially Isolated 00:13:37.446 RUH Desc #001: RUH Type: Initially Isolated 00:13:37.446 RUH Desc #002: RUH Type: Initially Isolated 00:13:37.446 RUH Desc #003: RUH Type: Initially Isolated 00:13:37.446 RUH Desc #004: RUH Type: Initially Isolated 00:13:37.446 RUH Desc #005: RUH Type: Initially Isolated 00:13:37.446 RUH Desc #006: RUH Type: Initially Isolated 00:13:37.446 RUH Desc #007: RUH Type: Initially Isolated 00:13:37.446 00:13:37.446 FDP reclaim unit handle usage log page 00:13:37.446 ====================================== 00:13:37.446 Number of Reclaim Unit Handles: 8 00:13:37.446 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:37.446 RUH Usage Desc #001: RUH Attributes: Unused 00:13:37.446 RUH Usage Desc #002: RUH Attributes: Unused 00:13:37.446 RUH Usage Desc #003: RUH Attributes: Unused 00:13:37.446 RUH Usage Desc #004: RUH Attributes: Unused 00:13:37.446 RUH Usage Desc #005: RUH Attributes: Unused 00:13:37.446 RUH Usage Desc #006: RUH Attributes: Unused 00:13:37.446 RUH Usage Desc #007: RUH Attributes: Unused 00:13:37.446 00:13:37.446 FDP statistics log page 00:13:37.446 ======================= 00:13:37.446 Host bytes with metadata written: 973778944 00:13:37.446 Media bytes with metadata written: 974032896 00:13:37.446 Media bytes erased: 0 00:13:37.446 00:13:37.446 FDP Reclaim unit handle status 00:13:37.446 ============================== 00:13:37.446 Number of RUHS descriptors: 2 00:13:37.446 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001f55 00:13:37.446 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:37.446 
00:13:37.446 FDP write on placement id: 0 success 00:13:37.446 00:13:37.446 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:37.446 00:13:37.446 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:37.446 00:13:37.446 Get Feature: FDP Events for Placement handle: #0 00:13:37.446 ======================== 00:13:37.446 Number of FDP Events: 6 00:13:37.446 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:37.446 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:37.446 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:13:37.446 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:37.446 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:37.446 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:13:37.446 00:13:37.446 FDP events log page 00:13:37.446 =================== 00:13:37.446 Number of FDP events: 1 00:13:37.446 FDP Event #0: 00:13:37.446 Event Type: RU Not Written to Capacity 00:13:37.446 Placement Identifier: Valid 00:13:37.446 NSID: Valid 00:13:37.446 Location: Valid 00:13:37.446 Placement Identifier: 0 00:13:37.446 Event Timestamp: b 00:13:37.446 Namespace Identifier: 1 00:13:37.446 Reclaim Group Identifier: 0 00:13:37.446 Reclaim Unit Handle Identifier: 0 00:13:37.446 00:13:37.446 FDP test passed 00:13:37.446 00:13:37.446 real 0m0.276s 00:13:37.446 user 0m0.089s 00:13:37.446 sys 0m0.086s 00:13:37.446 23:21:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.446 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:13:37.446 ************************************ 00:13:37.446 END TEST nvme_flexible_data_placement 00:13:37.446 ************************************ 00:13:37.446 00:13:37.446 real 0m8.877s 00:13:37.446 user 0m1.505s 00:13:37.446 sys 0m2.518s 00:13:37.446 23:21:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.446 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:13:37.446 ************************************ 00:13:37.446 END TEST nvme_fdp 00:13:37.446 ************************************ 00:13:37.705 23:21:29 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:13:37.705 23:21:29 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:37.705 23:21:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:37.705 23:21:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:37.705 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:13:37.705 ************************************ 00:13:37.705 START TEST nvme_rpc 00:13:37.705 ************************************ 00:13:37.705 23:21:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:37.705 * Looking for test storage...
00:13:37.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:37.705 23:21:29 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:37.705 23:21:29 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:37.705 23:21:29 -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:37.705 23:21:29 -- common/autotest_common.sh@1509 -- # local bdfs 00:13:37.705 23:21:29 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:37.705 23:21:29 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:37.705 23:21:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:37.705 23:21:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:13:37.705 23:21:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:37.705 23:21:29 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:37.705 23:21:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:37.964 23:21:29 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:37.964 23:21:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:13:37.964 23:21:29 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:13:37.964 23:21:29 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:13:37.964 23:21:29 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67771 00:13:37.964 23:21:29 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:37.964 23:21:29 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:37.964 23:21:29 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67771 00:13:37.964 23:21:29 -- common/autotest_common.sh@819 -- # '[' -z 67771 ']' 00:13:37.964 23:21:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.964 23:21:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:37.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.964 23:21:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.964 23:21:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:37.964 23:21:29 -- common/autotest_common.sh@10 -- # set +x 00:13:37.964 [2024-07-26 23:21:29.606501] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:13:37.964 [2024-07-26 23:21:29.606612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67771 ] 00:13:38.224 [2024-07-26 23:21:29.775042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:38.483 [2024-07-26 23:21:29.982041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:38.483 [2024-07-26 23:21:29.982422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.483 [2024-07-26 23:21:29.982457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.419 23:21:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:39.419 23:21:31 -- common/autotest_common.sh@852 -- # return 0 00:13:39.419 23:21:31 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:13:39.677 Nvme0n1 00:13:39.677 23:21:31 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:39.677 23:21:31 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:39.677 request: 00:13:39.677 { 00:13:39.677 "filename": "non_existing_file", 00:13:39.677 "bdev_name": "Nvme0n1", 00:13:39.677 "method": "bdev_nvme_apply_firmware", 00:13:39.677 "req_id": 1 00:13:39.677 } 00:13:39.677 Got JSON-RPC error response 00:13:39.677 response: 00:13:39.677 { 00:13:39.677 "code": -32603, 00:13:39.677 "message": "open file failed." 00:13:39.677 } 00:13:39.936 23:21:31 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:39.936 23:21:31 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:39.936 23:21:31 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:39.936 23:21:31 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:39.936 23:21:31 -- nvme/nvme_rpc.sh@40 -- # killprocess 67771 00:13:39.936 23:21:31 -- common/autotest_common.sh@926 -- # '[' -z 67771 ']' 00:13:39.936 23:21:31 -- common/autotest_common.sh@930 -- # kill -0 67771 00:13:39.936 23:21:31 -- common/autotest_common.sh@931 -- # uname 00:13:39.936 23:21:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:39.936 23:21:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67771 00:13:39.936 23:21:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:39.936 23:21:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:39.936 killing process with pid 67771 00:13:39.936 23:21:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67771' 00:13:39.936 23:21:31 -- common/autotest_common.sh@945 -- # kill 67771 00:13:39.936 23:21:31 -- common/autotest_common.sh@950 -- # wait 67771 00:13:42.468 00:13:42.468 real 0m4.552s 00:13:42.468 user 0m8.164s 00:13:42.468 sys 0m0.746s 00:13:42.468 23:21:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.468 ************************************ 00:13:42.468 END TEST nvme_rpc 00:13:42.468 ************************************ 00:13:42.468 23:21:33 -- common/autotest_common.sh@10 -- # set +x 00:13:42.469 23:21:33 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:42.469 23:21:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:42.469 23:21:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 
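The negative-path RPC sequence exercised above can be reproduced by hand against a running spdk_tgt; each command below appears verbatim in the trace, and the apply_firmware call is expected to fail with the -32603 "open file failed." error since the firmware image does not exist:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Attach the first controller, attempt a firmware update from a bogus
# file, then detach; only the middle call should fail.
$rpc_py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
$rpc_py bdev_nvme_apply_firmware non_existing_file Nvme0n1 \
    || echo "expected failure: open file failed (-32603)"
$rpc_py bdev_nvme_detach_controller Nvme0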
00:13:42.469 23:21:33 -- common/autotest_common.sh@10 -- # set +x 00:13:42.469 ************************************ 00:13:42.469 START TEST nvme_rpc_timeouts 00:13:42.469 ************************************ 00:13:42.469 23:21:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:42.469 * Looking for test storage... 00:13:42.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:42.469 23:21:34 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:42.469 23:21:34 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67849 00:13:42.469 23:21:34 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67849 00:13:42.469 23:21:34 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67872 00:13:42.469 23:21:34 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:42.469 23:21:34 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:42.469 23:21:34 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67872 00:13:42.469 23:21:34 -- common/autotest_common.sh@819 -- # '[' -z 67872 ']' 00:13:42.469 23:21:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.469 23:21:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:42.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.469 23:21:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.469 23:21:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:42.469 23:21:34 -- common/autotest_common.sh@10 -- # set +x 00:13:42.469 [2024-07-26 23:21:34.106945] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:13:42.469 [2024-07-26 23:21:34.107090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67872 ] 00:13:42.728 [2024-07-26 23:21:34.277133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:42.987 [2024-07-26 23:21:34.483640] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:42.987 [2024-07-26 23:21:34.483986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.987 [2024-07-26 23:21:34.484043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.922 23:21:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:43.922 23:21:35 -- common/autotest_common.sh@852 -- # return 0 00:13:43.922 Checking default timeout settings: 00:13:43.922 23:21:35 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:43.922 23:21:35 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:44.181 Making settings changes with rpc: 00:13:44.181 23:21:35 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:44.181 23:21:35 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:44.439 23:21:36 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:13:44.439 Check default vs. modified settings: 00:13:44.439 23:21:36 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67849 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67849 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:44.698 Setting action_on_timeout is changed as expected. 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
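Condensed, the round-trip being verified here is: snapshot the default bdev_nvme options, change the three timeout knobs over RPC, snapshot again. A sketch using the flags and scratch-file names from this run (the trace does not show the redirections, so routing save_config output into the tmpfiles is an assumption):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py save_config > /tmp/settings_default_67849        # defaults snapshot
$rpc_py bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc_py save_config > /tmp/settings_modified_67849       # modified snapshot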
00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67849 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67849 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:44.698 Setting timeout_us is changed as expected. 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67849 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:44.698 23:21:36 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:44.699 23:21:36 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:44.699 23:21:36 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67849 00:13:44.699 23:21:36 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:44.699 23:21:36 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:44.699 23:21:36 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:44.699 23:21:36 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:44.699 Setting timeout_admin_us is changed as expected. 00:13:44.699 23:21:36 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:13:44.699 23:21:36 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:44.699 23:21:36 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67849 /tmp/settings_modified_67849 00:13:44.699 23:21:36 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67872 00:13:44.699 23:21:36 -- common/autotest_common.sh@926 -- # '[' -z 67872 ']' 00:13:44.699 23:21:36 -- common/autotest_common.sh@930 -- # kill -0 67872 00:13:44.699 23:21:36 -- common/autotest_common.sh@931 -- # uname 00:13:44.699 23:21:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:44.699 23:21:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67872 00:13:44.699 23:21:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:44.699 killing process with pid 67872 00:13:44.699 23:21:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:44.699 23:21:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67872' 00:13:44.699 23:21:36 -- common/autotest_common.sh@945 -- # kill 67872 00:13:44.699 23:21:36 -- common/autotest_common.sh@950 -- # wait 67872 00:13:47.233 RPC TIMEOUT SETTING TEST PASSED. 00:13:47.233 23:21:38 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
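Each of the three settings is verified the same way. Extracted from the loop above into a hypothetical check_setting helper (the grep/awk/sed pipeline and file names match this run; the helper name itself is illustrative):

# Compare one knob between the two saved configs; sed scrubs JSON
# punctuation so values like "12000000," compare cleanly.
check_setting() {
    local setting=$1 before after
    before=$(grep "$setting" /tmp/settings_default_67849  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_67849 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before == "$after" ]] && return 1   # unchanged -> fail
    echo "Setting $setting is changed as expected."
}

check_setting action_on_timeout   # none -> abort
check_setting timeout_us          # 0    -> 12000000
check_setting timeout_admin_us    # 0    -> 24000000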
00:13:47.233 00:13:47.233 real 0m4.793s 00:13:47.233 user 0m8.908s 00:13:47.233 sys 0m0.752s 00:13:47.233 23:21:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.233 ************************************ 00:13:47.233 END TEST nvme_rpc_timeouts 00:13:47.233 23:21:38 -- common/autotest_common.sh@10 -- # set +x 00:13:47.233 ************************************ 00:13:47.233 23:21:38 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:13:47.233 23:21:38 -- spdk/autotest.sh@255 -- # [[ 1 -eq 1 ]] 00:13:47.233 23:21:38 -- spdk/autotest.sh@256 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:47.233 23:21:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:47.233 23:21:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:47.233 23:21:38 -- common/autotest_common.sh@10 -- # set +x 00:13:47.233 ************************************ 00:13:47.233 START TEST nvme_xnvme 00:13:47.233 ************************************ 00:13:47.233 23:21:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:47.233 * Looking for test storage... 00:13:47.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:47.233 23:21:38 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:47.233 23:21:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.233 23:21:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.233 23:21:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.233 23:21:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.233 23:21:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.233 23:21:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.233 23:21:38 -- paths/export.sh@5 -- # export PATH 00:13:47.233 23:21:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:47.233 
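The copy test that follows pairs a 1 GiB null_blk device (exposed as an xnvme bdev) with a 1 GiB malloc bdev and pushes data both ways through spdk_dd. A condensed sketch of what it drives; the JSON matches the config the test generates below, though writing it to a scratch file rather than /dev/fd/62 is an assumption made for readability:

modprobe null_blk gb=1                     # provides /dev/nullb0

cat > /tmp/xnvme_copy.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"block_size": 512, "num_blocks": 2097152, "name": "malloc0"},
   "method": "bdev_malloc_create"},
  {"params": {"io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
EOF

# malloc0 -> null0 copy; the reverse pass swaps --ib/--ob, and the
# io_uring rounds swap the io_mechanism.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 \
    --json /tmp/xnvme_copy.json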
23:21:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:47.233 23:21:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:47.233 23:21:38 -- common/autotest_common.sh@10 -- # set +x 00:13:47.233 ************************************ 00:13:47.233 START TEST xnvme_to_malloc_dd_copy 00:13:47.233 ************************************ 00:13:47.233 23:21:38 -- common/autotest_common.sh@1104 -- # malloc_to_xnvme_copy 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:47.233 23:21:38 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:47.233 23:21:38 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:47.233 23:21:38 -- dd/common.sh@191 -- # return 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@18 -- # local io 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:47.233 23:21:38 -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:47.233 23:21:38 -- dd/common.sh@31 -- # xtrace_disable 00:13:47.233 23:21:38 -- common/autotest_common.sh@10 -- # set +x 00:13:47.233 { 00:13:47.233 "subsystems": [ 00:13:47.233 { 00:13:47.233 "subsystem": "bdev", 00:13:47.233 "config": [ 00:13:47.233 { 00:13:47.233 "params": { 00:13:47.233 "block_size": 512, 00:13:47.233 "num_blocks": 2097152, 00:13:47.233 "name": "malloc0" 00:13:47.233 }, 00:13:47.233 "method": "bdev_malloc_create" 00:13:47.233 }, 00:13:47.233 { 00:13:47.233 "params": { 00:13:47.233 "io_mechanism": "libaio", 00:13:47.233 "filename": "/dev/nullb0", 00:13:47.233 "name": "null0" 00:13:47.233 }, 00:13:47.233 "method": "bdev_xnvme_create" 00:13:47.233 }, 00:13:47.233 { 00:13:47.233 "method": "bdev_wait_for_examine" 00:13:47.233 } 00:13:47.233 ] 00:13:47.233 } 00:13:47.233 ] 00:13:47.233 } 00:13:47.500 [2024-07-26 23:21:39.010739] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:13:47.500 [2024-07-26 23:21:39.010833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68017 ] 00:13:47.500 [2024-07-26 23:21:39.177468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.796 [2024-07-26 23:21:39.383424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.123  Copying: 282/1024 [MB] (282 MBps) Copying: 567/1024 [MB] (285 MBps) Copying: 852/1024 [MB] (285 MBps) Copying: 1024/1024 [MB] (average 284 MBps) 00:13:57.123 00:13:57.123 23:21:48 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:57.123 23:21:48 -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:57.123 23:21:48 -- dd/common.sh@31 -- # xtrace_disable 00:13:57.123 23:21:48 -- common/autotest_common.sh@10 -- # set +x 00:13:57.123 { 00:13:57.123 "subsystems": [ 00:13:57.123 { 00:13:57.123 "subsystem": "bdev", 00:13:57.123 "config": [ 00:13:57.123 { 00:13:57.123 "params": { 00:13:57.123 "block_size": 512, 00:13:57.123 "num_blocks": 2097152, 00:13:57.123 "name": "malloc0" 00:13:57.123 }, 00:13:57.123 "method": "bdev_malloc_create" 00:13:57.123 }, 00:13:57.123 { 00:13:57.123 "params": { 00:13:57.123 "io_mechanism": "libaio", 00:13:57.123 "filename": "/dev/nullb0", 00:13:57.123 "name": "null0" 00:13:57.123 }, 00:13:57.123 "method": "bdev_xnvme_create" 00:13:57.123 }, 00:13:57.123 { 00:13:57.123 "method": "bdev_wait_for_examine" 00:13:57.123 } 00:13:57.123 ] 00:13:57.123 } 00:13:57.123 ] 00:13:57.123 } 00:13:57.123 [2024-07-26 23:21:48.434911] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:13:57.123 [2024-07-26 23:21:48.435051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68121 ] 00:13:57.123 [2024-07-26 23:21:48.603996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.123 [2024-07-26 23:21:48.815435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.725  Copying: 268/1024 [MB] (268 MBps) Copying: 533/1024 [MB] (264 MBps) Copying: 812/1024 [MB] (279 MBps) Copying: 1024/1024 [MB] (average 272 MBps) 00:14:06.725 00:14:06.725 23:21:58 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:06.725 23:21:58 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:06.725 23:21:58 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:06.725 23:21:58 -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:06.725 23:21:58 -- dd/common.sh@31 -- # xtrace_disable 00:14:06.725 23:21:58 -- common/autotest_common.sh@10 -- # set +x 00:14:06.725 { 00:14:06.725 "subsystems": [ 00:14:06.725 { 00:14:06.725 "subsystem": "bdev", 00:14:06.725 "config": [ 00:14:06.725 { 00:14:06.725 "params": { 00:14:06.725 "block_size": 512, 00:14:06.725 "num_blocks": 2097152, 00:14:06.725 "name": "malloc0" 00:14:06.725 }, 00:14:06.725 "method": "bdev_malloc_create" 00:14:06.725 }, 00:14:06.725 { 00:14:06.725 "params": { 00:14:06.725 "io_mechanism": "io_uring", 00:14:06.725 "filename": "/dev/nullb0", 00:14:06.725 "name": "null0" 00:14:06.725 }, 00:14:06.725 "method": "bdev_xnvme_create" 00:14:06.725 }, 00:14:06.725 { 00:14:06.725 "method": "bdev_wait_for_examine" 00:14:06.725 } 00:14:06.725 ] 00:14:06.725 } 00:14:06.725 ] 00:14:06.725 } 00:14:06.725 [2024-07-26 23:21:58.226034] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:14:06.725 [2024-07-26 23:21:58.226134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68231 ] 00:14:06.725 [2024-07-26 23:21:58.394861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.983 [2024-07-26 23:21:58.607801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.587  Copying: 266/1024 [MB] (266 MBps) Copying: 533/1024 [MB] (267 MBps) Copying: 800/1024 [MB] (267 MBps) Copying: 1024/1024 [MB] (average 267 MBps) 00:14:16.587 00:14:16.587 23:22:07 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:16.587 23:22:07 -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:16.587 23:22:07 -- dd/common.sh@31 -- # xtrace_disable 00:14:16.587 23:22:07 -- common/autotest_common.sh@10 -- # set +x 00:14:16.587 { 00:14:16.587 "subsystems": [ 00:14:16.587 { 00:14:16.587 "subsystem": "bdev", 00:14:16.587 "config": [ 00:14:16.587 { 00:14:16.587 "params": { 00:14:16.587 "block_size": 512, 00:14:16.587 "num_blocks": 2097152, 00:14:16.587 "name": "malloc0" 00:14:16.587 }, 00:14:16.587 "method": "bdev_malloc_create" 00:14:16.587 }, 00:14:16.587 { 00:14:16.587 "params": { 00:14:16.587 "io_mechanism": "io_uring", 00:14:16.587 "filename": "/dev/nullb0", 00:14:16.587 "name": "null0" 00:14:16.587 }, 00:14:16.587 "method": "bdev_xnvme_create" 00:14:16.587 }, 00:14:16.587 { 00:14:16.587 "method": "bdev_wait_for_examine" 00:14:16.587 } 00:14:16.587 ] 00:14:16.587 } 00:14:16.587 ] 00:14:16.587 } 00:14:16.587 [2024-07-26 23:22:08.070562] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:14:16.587 [2024-07-26 23:22:08.070662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68346 ] 00:14:16.587 [2024-07-26 23:22:08.240689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.846 [2024-07-26 23:22:08.446693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.507  Copying: 268/1024 [MB] (268 MBps) Copying: 538/1024 [MB] (269 MBps) Copying: 808/1024 [MB] (269 MBps) Copying: 1024/1024 [MB] (average 269 MBps) 00:14:26.507 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:26.507 23:22:17 -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:26.507 00:14:26.507 real 0m38.867s 00:14:26.507 user 0m34.125s 00:14:26.507 sys 0m4.204s 00:14:26.507 23:22:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.507 23:22:17 -- common/autotest_common.sh@10 -- # set +x 00:14:26.507 ************************************ 00:14:26.507 END TEST xnvme_to_malloc_dd_copy 00:14:26.507 ************************************ 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:26.507 23:22:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:26.507 23:22:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:26.507 23:22:17 -- common/autotest_common.sh@10 -- # set +x 00:14:26.507 ************************************ 00:14:26.507 START TEST xnvme_bdevperf 00:14:26.507 ************************************ 00:14:26.507 23:22:17 -- common/autotest_common.sh@1104 -- # xnvme_bdevperf 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:26.507 23:22:17 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:14:26.507 23:22:17 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:14:26.507 23:22:17 -- dd/common.sh@191 -- # return 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@60 -- # local io 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:26.507 23:22:17 -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:26.507 23:22:17 -- dd/common.sh@31 -- # xtrace_disable 00:14:26.507 23:22:17 -- common/autotest_common.sh@10 -- # set +x 00:14:26.507 { 00:14:26.507 "subsystems": [ 00:14:26.507 { 00:14:26.507 "subsystem": "bdev", 00:14:26.507 "config": [ 00:14:26.507 { 00:14:26.507 "params": { 00:14:26.507 "io_mechanism": "libaio", 00:14:26.507 "filename": "/dev/nullb0", 00:14:26.507 "name": "null0" 00:14:26.507 }, 00:14:26.507 "method": 
"bdev_xnvme_create" 00:14:26.507 }, 00:14:26.507 { 00:14:26.507 "method": "bdev_wait_for_examine" 00:14:26.507 } 00:14:26.507 ] 00:14:26.507 } 00:14:26.507 ] 00:14:26.507 } 00:14:26.507 [2024-07-26 23:22:17.950762] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:26.507 [2024-07-26 23:22:17.950895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68484 ] 00:14:26.507 [2024-07-26 23:22:18.123237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.766 [2024-07-26 23:22:18.328795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.025 Running I/O for 5 seconds... 00:14:32.296 00:14:32.296 Latency(us) 00:14:32.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.296 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:32.296 null0 : 5.00 140127.61 547.37 0.00 0.00 454.16 146.40 1868.70 00:14:32.296 =================================================================================================================== 00:14:32.296 Total : 140127.61 547.37 0.00 0.00 454.16 146.40 1868.70 00:14:33.233 23:22:24 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:33.233 23:22:24 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:33.233 23:22:24 -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:33.233 23:22:24 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:33.233 23:22:24 -- dd/common.sh@31 -- # xtrace_disable 00:14:33.233 23:22:24 -- common/autotest_common.sh@10 -- # set +x 00:14:33.233 { 00:14:33.233 "subsystems": [ 00:14:33.233 { 00:14:33.233 "subsystem": "bdev", 00:14:33.233 "config": [ 00:14:33.233 { 00:14:33.233 "params": { 00:14:33.233 "io_mechanism": "io_uring", 00:14:33.233 "filename": "/dev/nullb0", 00:14:33.233 "name": "null0" 00:14:33.233 }, 00:14:33.233 "method": "bdev_xnvme_create" 00:14:33.233 }, 00:14:33.233 { 00:14:33.233 "method": "bdev_wait_for_examine" 00:14:33.233 } 00:14:33.233 ] 00:14:33.233 } 00:14:33.233 ] 00:14:33.233 } 00:14:33.233 [2024-07-26 23:22:24.976062] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:33.233 [2024-07-26 23:22:24.976169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68559 ] 00:14:33.493 [2024-07-26 23:22:25.149495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.751 [2024-07-26 23:22:25.355576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.010 Running I/O for 5 seconds... 
00:14:39.282 00:14:39.282 Latency(us) 00:14:39.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.282 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:39.282 null0 : 5.00 218807.60 854.72 0.00 0.00 290.26 192.46 2908.32 00:14:39.282 =================================================================================================================== 00:14:39.282 Total : 218807.60 854.72 0.00 0.00 290.26 192.46 2908.32 00:14:40.661 23:22:32 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:40.661 23:22:32 -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:40.661 00:14:40.661 real 0m14.235s 00:14:40.661 user 0m10.616s 00:14:40.661 sys 0m3.407s 00:14:40.661 23:22:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.661 23:22:32 -- common/autotest_common.sh@10 -- # set +x 00:14:40.661 ************************************ 00:14:40.661 END TEST xnvme_bdevperf 00:14:40.661 ************************************ 00:14:40.661 00:14:40.661 real 0m53.382s 00:14:40.661 user 0m44.844s 00:14:40.661 sys 0m7.782s 00:14:40.661 23:22:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:40.661 ************************************ 00:14:40.661 END TEST nvme_xnvme 00:14:40.661 ************************************ 00:14:40.661 23:22:32 -- common/autotest_common.sh@10 -- # set +x 00:14:40.661 23:22:32 -- spdk/autotest.sh@257 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:40.661 23:22:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:40.661 23:22:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:40.661 23:22:32 -- common/autotest_common.sh@10 -- # set +x 00:14:40.661 ************************************ 00:14:40.661 START TEST blockdev_xnvme 00:14:40.661 ************************************ 00:14:40.661 23:22:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:40.661 * Looking for test storage... 
00:14:40.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:40.661 23:22:32 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:40.661 23:22:32 -- bdev/nbd_common.sh@6 -- # set -e 00:14:40.661 23:22:32 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:40.661 23:22:32 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:40.661 23:22:32 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:40.661 23:22:32 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:40.661 23:22:32 -- bdev/blockdev.sh@18 -- # : 00:14:40.661 23:22:32 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:14:40.661 23:22:32 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:14:40.661 23:22:32 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:14:40.661 23:22:32 -- bdev/blockdev.sh@672 -- # uname -s 00:14:40.661 23:22:32 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:14:40.661 23:22:32 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:14:40.661 23:22:32 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:14:40.661 23:22:32 -- bdev/blockdev.sh@681 -- # crypto_device= 00:14:40.661 23:22:32 -- bdev/blockdev.sh@682 -- # dek= 00:14:40.661 23:22:32 -- bdev/blockdev.sh@683 -- # env_ctx= 00:14:40.661 23:22:32 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:14:40.661 23:22:32 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:14:40.661 23:22:32 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:14:40.661 23:22:32 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:14:40.661 23:22:32 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:14:40.661 23:22:32 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=68708 00:14:40.661 23:22:32 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:40.661 23:22:32 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:40.661 23:22:32 -- bdev/blockdev.sh@47 -- # waitforlisten 68708 00:14:40.661 23:22:32 -- common/autotest_common.sh@819 -- # '[' -z 68708 ']' 00:14:40.661 23:22:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.661 23:22:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:40.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.661 23:22:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.661 23:22:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:40.661 23:22:32 -- common/autotest_common.sh@10 -- # set +x 00:14:40.920 [2024-07-26 23:22:32.457585] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:14:40.920 [2024-07-26 23:22:32.457728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68708 ] 00:14:40.920 [2024-07-26 23:22:32.631731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.178 [2024-07-26 23:22:32.893317] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:41.179 [2024-07-26 23:22:32.893546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.555 23:22:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:42.555 23:22:33 -- common/autotest_common.sh@852 -- # return 0 00:14:42.555 23:22:33 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:14:42.555 23:22:33 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:14:42.555 23:22:33 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:14:42.555 23:22:33 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:14:42.555 23:22:33 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:43.123 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:43.123 Waiting for block devices as requested 00:14:43.123 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:14:43.382 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:14:43.382 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:14:43.382 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:14:48.660 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:14:48.660 23:22:40 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:14:48.660 23:22:40 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:14:48.660 23:22:40 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:14:48.660 23:22:40 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:14:48.660 23:22:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:48.660 23:22:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:14:48.660 23:22:40 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:14:48.660 23:22:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:14:48.660 23:22:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:48.660 23:22:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:48.660 23:22:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:14:48.660 23:22:40 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:14:48.660 23:22:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:48.660 23:22:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:48.660 23:22:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:48.660 23:22:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:14:48.660 23:22:40 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:14:48.660 23:22:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:48.660 23:22:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:48.660 23:22:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:48.660 23:22:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:14:48.660 23:22:40 -- common/autotest_common.sh@1647 -- # local 
device=nvme1n2 00:14:48.660 23:22:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:48.660 23:22:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:48.660 23:22:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:48.660 23:22:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:14:48.660 23:22:40 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:14:48.660 23:22:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:48.660 23:22:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:48.660 23:22:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:48.660 23:22:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:14:48.660 23:22:40 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:14:48.660 23:22:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:48.660 23:22:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:48.660 23:22:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:48.660 23:22:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:14:48.660 23:22:40 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:14:48.660 23:22:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:48.660 23:22:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:48.660 23:22:40 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:48.660 23:22:40 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:48.660 23:22:40 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:48.660 23:22:40 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:48.660 23:22:40 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:48.660 23:22:40 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:48.660 23:22:40 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:48.660 23:22:40 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:48.660 23:22:40 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:48.660 23:22:40 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:48.660 23:22:40 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:48.660 23:22:40 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:14:48.660 23:22:40 -- bdev/blockdev.sh@98 -- # rpc_cmd 
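Stripped of trace prefixes, the device-discovery logic above amounts to the loop below; rpc_cmd is the framework's RPC wrapper, which here consumes one bdev_xnvme_create command per discovered namespace from stdin:

io_mechanism=io_uring
nvmes=()

# Skip zoned namespaces (queue/zoned != none), collect the rest.
for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue
    zoned=$(cat "/sys/block/${nvme##*/}/queue/zoned" 2>/dev/null)
    [[ $zoned != none ]] && continue
    nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism")
done

(( ${#nvmes[@]} > 0 )) || exit 1
printf '%s\n' "${nvmes[@]}" | rpc_cmd      # creates nvme0n1 .. nvme3n1 bdevs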
00:14:48.660 23:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.660 23:22:40 -- common/autotest_common.sh@10 -- # set +x 00:14:48.660 23:22:40 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:48.660 nvme0n1 00:14:48.660 nvme1n1 00:14:48.660 nvme1n2 00:14:48.660 nvme1n3 00:14:48.660 nvme2n1 00:14:48.660 nvme3n1 00:14:48.660 23:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:14:48.660 23:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.660 23:22:40 -- common/autotest_common.sh@10 -- # set +x 00:14:48.660 23:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@738 -- # cat 00:14:48.660 23:22:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:14:48.660 23:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.660 23:22:40 -- common/autotest_common.sh@10 -- # set +x 00:14:48.660 23:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:14:48.660 23:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.660 23:22:40 -- common/autotest_common.sh@10 -- # set +x 00:14:48.660 23:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.660 23:22:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:48.661 23:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.661 23:22:40 -- common/autotest_common.sh@10 -- # set +x 00:14:48.661 23:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.661 23:22:40 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:14:48.661 23:22:40 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:14:48.661 23:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.661 23:22:40 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:14:48.661 23:22:40 -- common/autotest_common.sh@10 -- # set +x 00:14:48.921 23:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.921 23:22:40 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:14:48.921 23:22:40 -- bdev/blockdev.sh@747 -- # jq -r .name 00:14:48.921 23:22:40 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3fe74e85-d32e-4e97-84a9-a253f692a5cd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3fe74e85-d32e-4e97-84a9-a253f692a5cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "66a26e78-9911-480e-b6a8-8baaa6975b51"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "66a26e78-9911-480e-b6a8-8baaa6975b51",' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "bb591f15-b56a-447d-b392-8f2f0f0de022"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb591f15-b56a-447d-b392-8f2f0f0de022",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "af881825-cdb4-4cc6-9001-be6d171f6a84"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "af881825-cdb4-4cc6-9001-be6d171f6a84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "69b5117f-97a5-430b-accf-fe3df78ef490"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "69b5117f-97a5-430b-accf-fe3df78ef490",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "37b5462b-e068-4344-a795-cfbba1f99096"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "37b5462b-e068-4344-a795-cfbba1f99096",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:48.921 23:22:40 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:14:48.921 23:22:40 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:14:48.921 23:22:40 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:14:48.921 23:22:40 -- bdev/blockdev.sh@752 -- # killprocess 68708 00:14:48.921 23:22:40 -- 
common/autotest_common.sh@926 -- # '[' -z 68708 ']' 00:14:48.921 23:22:40 -- common/autotest_common.sh@930 -- # kill -0 68708 00:14:48.921 23:22:40 -- common/autotest_common.sh@931 -- # uname 00:14:48.921 23:22:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:48.921 23:22:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68708 00:14:48.921 killing process with pid 68708 00:14:48.921 23:22:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:48.921 23:22:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:48.921 23:22:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68708' 00:14:48.921 23:22:40 -- common/autotest_common.sh@945 -- # kill 68708 00:14:48.921 23:22:40 -- common/autotest_common.sh@950 -- # wait 68708 00:14:51.458 23:22:42 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:51.458 23:22:42 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:51.458 23:22:42 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:14:51.458 23:22:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:51.458 23:22:42 -- common/autotest_common.sh@10 -- # set +x 00:14:51.458 ************************************ 00:14:51.458 START TEST bdev_hello_world 00:14:51.458 ************************************ 00:14:51.458 23:22:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:51.458 [2024-07-26 23:22:42.795062] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:51.458 [2024-07-26 23:22:42.795178] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69108 ] 00:14:51.458 [2024-07-26 23:22:42.966445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.458 [2024-07-26 23:22:43.173814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.025 [2024-07-26 23:22:43.653443] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:52.025 [2024-07-26 23:22:43.653504] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:52.025 [2024-07-26 23:22:43.653522] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:52.025 [2024-07-26 23:22:43.655418] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:52.025 [2024-07-26 23:22:43.655893] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:52.025 [2024-07-26 23:22:43.655935] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:52.025 [2024-07-26 23:22:43.656267] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
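The hello-world pass that just read its payload back is the stock SPDK example binary pointed at the first xnvme bdev; as invoked by the test above:

# Writes "Hello World!" through the bdev layer to nvme0n1, reads it
# back, and verifies the round trip (the NOTICE lines above).
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1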
00:14:52.025 00:14:52.025 [2024-07-26 23:22:43.656305] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:53.405 00:14:53.405 real 0m2.143s 00:14:53.405 user 0m1.767s 00:14:53.405 sys 0m0.260s 00:14:53.405 23:22:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:53.405 23:22:44 -- common/autotest_common.sh@10 -- # set +x 00:14:53.405 ************************************ 00:14:53.405 END TEST bdev_hello_world 00:14:53.405 ************************************ 00:14:53.405 23:22:44 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:14:53.405 23:22:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:53.405 23:22:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:53.405 23:22:44 -- common/autotest_common.sh@10 -- # set +x 00:14:53.405 ************************************ 00:14:53.405 START TEST bdev_bounds 00:14:53.405 ************************************ 00:14:53.405 23:22:44 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:14:53.405 23:22:44 -- bdev/blockdev.sh@288 -- # bdevio_pid=69150 00:14:53.405 23:22:44 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:53.405 23:22:44 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:53.405 23:22:44 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 69150' 00:14:53.405 Process bdevio pid: 69150 00:14:53.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.405 23:22:44 -- bdev/blockdev.sh@291 -- # waitforlisten 69150 00:14:53.405 23:22:44 -- common/autotest_common.sh@819 -- # '[' -z 69150 ']' 00:14:53.405 23:22:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.405 23:22:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:53.405 23:22:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.405 23:22:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:53.405 23:22:44 -- common/autotest_common.sh@10 -- # set +x 00:14:53.405 [2024-07-26 23:22:45.017904] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
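The bdev_bounds stage starting here runs bdevio as a long-lived RPC server against the same JSON config and then drives the per-bdev test suites from tests.py; its DPDK EAL parameters print just below. In outline, the orchestration looks like this (a sketch, simplified from the blockdev.sh trace above):

    # Sketch of the bdev_bounds wiring; waitforlisten and killprocess are the
    # autotest_common.sh helpers traced elsewhere in this log.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/test/bdev/bdevio/bdevio -w -s 0 --json "$SPDK"/test/bdev/bdev.json '' &
    bdevio_pid=$!
    waitforlisten "$bdevio_pid"            # block until /var/tmp/spdk.sock is up
    "$SPDK"/test/bdev/bdevio/tests.py perform_tests
    killprocess "$bdevio_pid"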
00:14:53.405 [2024-07-26 23:22:45.018056] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69150 ] 00:14:53.663 [2024-07-26 23:22:45.192622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:53.663 [2024-07-26 23:22:45.401804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.663 [2024-07-26 23:22:45.402149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.663 [2024-07-26 23:22:45.402441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.038 23:22:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:55.038 23:22:46 -- common/autotest_common.sh@852 -- # return 0 00:14:55.038 23:22:46 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:55.038 I/O targets: 00:14:55.038 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:55.038 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:55.038 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:55.038 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:55.038 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:55.038 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:55.038 00:14:55.038 00:14:55.038 CUnit - A unit testing framework for C - Version 2.1-3 00:14:55.038 http://cunit.sourceforge.net/ 00:14:55.038 00:14:55.038 00:14:55.038 Suite: bdevio tests on: nvme3n1 00:14:55.038 Test: blockdev write read block ...passed 00:14:55.038 Test: blockdev write zeroes read block ...passed 00:14:55.038 Test: blockdev write zeroes read no split ...passed 00:14:55.038 Test: blockdev write zeroes read split ...passed 00:14:55.038 Test: blockdev write zeroes read split partial ...passed 00:14:55.038 Test: blockdev reset ...passed 00:14:55.038 Test: blockdev write read 8 blocks ...passed 00:14:55.038 Test: blockdev write read size > 128k ...passed 00:14:55.038 Test: blockdev write read invalid size ...passed 00:14:55.038 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:55.038 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:55.038 Test: blockdev write read max offset ...passed 00:14:55.038 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:55.038 Test: blockdev writev readv 8 blocks ...passed 00:14:55.038 Test: blockdev writev readv 30 x 1block ...passed 00:14:55.038 Test: blockdev writev readv block ...passed 00:14:55.038 Test: blockdev writev readv size > 128k ...passed 00:14:55.038 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:55.038 Test: blockdev comparev and writev ...passed 00:14:55.038 Test: blockdev nvme passthru rw ...passed 00:14:55.038 Test: blockdev nvme passthru vendor specific ...passed 00:14:55.038 Test: blockdev nvme admin passthru ...passed 00:14:55.038 Test: blockdev copy ...passed 00:14:55.038 Suite: bdevio tests on: nvme2n1 00:14:55.038 Test: blockdev write read block ...passed 00:14:55.038 Test: blockdev write zeroes read block ...passed 00:14:55.038 Test: blockdev write zeroes read no split ...passed 00:14:55.038 Test: blockdev write zeroes read split ...passed 00:14:55.038 Test: blockdev write zeroes read split partial ...passed 00:14:55.038 Test: blockdev reset ...passed 00:14:55.038 Test: blockdev write read 8 blocks ...passed 00:14:55.038 Test: blockdev write read size > 128k 
...passed 00:14:55.038 Test: blockdev write read invalid size ...passed 00:14:55.038 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:55.038 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:55.038 Test: blockdev write read max offset ...passed 00:14:55.039 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:55.039 Test: blockdev writev readv 8 blocks ...passed 00:14:55.039 Test: blockdev writev readv 30 x 1block ...passed 00:14:55.039 Test: blockdev writev readv block ...passed 00:14:55.039 Test: blockdev writev readv size > 128k ...passed 00:14:55.039 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:55.039 Test: blockdev comparev and writev ...passed 00:14:55.039 Test: blockdev nvme passthru rw ...passed 00:14:55.039 Test: blockdev nvme passthru vendor specific ...passed 00:14:55.039 Test: blockdev nvme admin passthru ...passed 00:14:55.039 Test: blockdev copy ...passed 00:14:55.039 Suite: bdevio tests on: nvme1n3 00:14:55.039 Test: blockdev write read block ...passed 00:14:55.039 Test: blockdev write zeroes read block ...passed 00:14:55.039 Test: blockdev write zeroes read no split ...passed 00:14:55.039 Test: blockdev write zeroes read split ...passed 00:14:55.300 Test: blockdev write zeroes read split partial ...passed 00:14:55.300 Test: blockdev reset ...passed 00:14:55.300 Test: blockdev write read 8 blocks ...passed 00:14:55.300 Test: blockdev write read size > 128k ...passed 00:14:55.300 Test: blockdev write read invalid size ...passed 00:14:55.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:55.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:55.300 Test: blockdev write read max offset ...passed 00:14:55.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:55.300 Test: blockdev writev readv 8 blocks ...passed 00:14:55.300 Test: blockdev writev readv 30 x 1block ...passed 00:14:55.300 Test: blockdev writev readv block ...passed 00:14:55.300 Test: blockdev writev readv size > 128k ...passed 00:14:55.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:55.300 Test: blockdev comparev and writev ...passed 00:14:55.300 Test: blockdev nvme passthru rw ...passed 00:14:55.300 Test: blockdev nvme passthru vendor specific ...passed 00:14:55.300 Test: blockdev nvme admin passthru ...passed 00:14:55.300 Test: blockdev copy ...passed 00:14:55.300 Suite: bdevio tests on: nvme1n2 00:14:55.300 Test: blockdev write read block ...passed 00:14:55.300 Test: blockdev write zeroes read block ...passed 00:14:55.300 Test: blockdev write zeroes read no split ...passed 00:14:55.300 Test: blockdev write zeroes read split ...passed 00:14:55.300 Test: blockdev write zeroes read split partial ...passed 00:14:55.300 Test: blockdev reset ...passed 00:14:55.300 Test: blockdev write read 8 blocks ...passed 00:14:55.300 Test: blockdev write read size > 128k ...passed 00:14:55.300 Test: blockdev write read invalid size ...passed 00:14:55.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:55.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:55.300 Test: blockdev write read max offset ...passed 00:14:55.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:55.300 Test: blockdev writev readv 8 blocks ...passed 00:14:55.300 Test: blockdev writev readv 30 x 1block ...passed 00:14:55.300 Test: blockdev writev readv 
block ...passed 00:14:55.300 Test: blockdev writev readv size > 128k ...passed 00:14:55.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:55.300 Test: blockdev comparev and writev ...passed 00:14:55.300 Test: blockdev nvme passthru rw ...passed 00:14:55.300 Test: blockdev nvme passthru vendor specific ...passed 00:14:55.300 Test: blockdev nvme admin passthru ...passed 00:14:55.300 Test: blockdev copy ...passed 00:14:55.300 Suite: bdevio tests on: nvme1n1 00:14:55.300 Test: blockdev write read block ...passed 00:14:55.300 Test: blockdev write zeroes read block ...passed 00:14:55.300 Test: blockdev write zeroes read no split ...passed 00:14:55.300 Test: blockdev write zeroes read split ...passed 00:14:55.300 Test: blockdev write zeroes read split partial ...passed 00:14:55.300 Test: blockdev reset ...passed 00:14:55.300 Test: blockdev write read 8 blocks ...passed 00:14:55.300 Test: blockdev write read size > 128k ...passed 00:14:55.300 Test: blockdev write read invalid size ...passed 00:14:55.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:55.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:55.300 Test: blockdev write read max offset ...passed 00:14:55.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:55.300 Test: blockdev writev readv 8 blocks ...passed 00:14:55.300 Test: blockdev writev readv 30 x 1block ...passed 00:14:55.300 Test: blockdev writev readv block ...passed 00:14:55.300 Test: blockdev writev readv size > 128k ...passed 00:14:55.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:55.300 Test: blockdev comparev and writev ...passed 00:14:55.300 Test: blockdev nvme passthru rw ...passed 00:14:55.300 Test: blockdev nvme passthru vendor specific ...passed 00:14:55.300 Test: blockdev nvme admin passthru ...passed 00:14:55.300 Test: blockdev copy ...passed 00:14:55.300 Suite: bdevio tests on: nvme0n1 00:14:55.300 Test: blockdev write read block ...passed 00:14:55.300 Test: blockdev write zeroes read block ...passed 00:14:55.300 Test: blockdev write zeroes read no split ...passed 00:14:55.300 Test: blockdev write zeroes read split ...passed 00:14:55.300 Test: blockdev write zeroes read split partial ...passed 00:14:55.300 Test: blockdev reset ...passed 00:14:55.300 Test: blockdev write read 8 blocks ...passed 00:14:55.300 Test: blockdev write read size > 128k ...passed 00:14:55.300 Test: blockdev write read invalid size ...passed 00:14:55.300 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:55.300 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:55.300 Test: blockdev write read max offset ...passed 00:14:55.300 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:55.300 Test: blockdev writev readv 8 blocks ...passed 00:14:55.300 Test: blockdev writev readv 30 x 1block ...passed 00:14:55.300 Test: blockdev writev readv block ...passed 00:14:55.300 Test: blockdev writev readv size > 128k ...passed 00:14:55.300 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:55.300 Test: blockdev comparev and writev ...passed 00:14:55.300 Test: blockdev nvme passthru rw ...passed 00:14:55.300 Test: blockdev nvme passthru vendor specific ...passed 00:14:55.300 Test: blockdev nvme admin passthru ...passed 00:14:55.300 Test: blockdev copy ...passed 00:14:55.300 00:14:55.300 Run Summary: Type Total Ran Passed Failed Inactive 00:14:55.300 suites 6 6 n/a 0 0 
00:14:55.300 tests 138 138 138 0 0 00:14:55.300 asserts 780 780 780 0 n/a 00:14:55.300 00:14:55.300 Elapsed time = 1.384 seconds 00:14:55.300 0 00:14:55.601 23:22:47 -- bdev/blockdev.sh@293 -- # killprocess 69150 00:14:55.601 23:22:47 -- common/autotest_common.sh@926 -- # '[' -z 69150 ']' 00:14:55.601 23:22:47 -- common/autotest_common.sh@930 -- # kill -0 69150 00:14:55.601 23:22:47 -- common/autotest_common.sh@931 -- # uname 00:14:55.601 23:22:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:55.601 23:22:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69150 00:14:55.601 23:22:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:55.601 23:22:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:55.601 23:22:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69150' 00:14:55.601 killing process with pid 69150 00:14:55.601 23:22:47 -- common/autotest_common.sh@945 -- # kill 69150 00:14:55.601 23:22:47 -- common/autotest_common.sh@950 -- # wait 69150 00:14:56.558 23:22:48 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:14:56.558 00:14:56.558 real 0m3.337s 00:14:56.558 user 0m8.135s 00:14:56.558 sys 0m0.432s 00:14:56.558 23:22:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.558 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:14:56.558 ************************************ 00:14:56.558 END TEST bdev_bounds 00:14:56.558 ************************************ 00:14:56.818 23:22:48 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:14:56.818 23:22:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:56.818 23:22:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:56.818 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:14:56.818 ************************************ 00:14:56.818 START TEST bdev_nbd 00:14:56.818 ************************************ 00:14:56.818 23:22:48 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:14:56.818 23:22:48 -- bdev/blockdev.sh@298 -- # uname -s 00:14:56.818 23:22:48 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:14:56.818 23:22:48 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:56.818 23:22:48 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:56.818 23:22:48 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:56.818 23:22:48 -- bdev/blockdev.sh@302 -- # local bdev_all 00:14:56.818 23:22:48 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:14:56.818 23:22:48 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:14:56.818 23:22:48 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:56.818 23:22:48 -- bdev/blockdev.sh@309 -- # local nbd_all 00:14:56.818 23:22:48 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:14:56.818 23:22:48 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:56.818 23:22:48 -- bdev/blockdev.sh@312 -- # local nbd_list 00:14:56.818 23:22:48 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 
'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:56.818 23:22:48 -- bdev/blockdev.sh@313 -- # local bdev_list 00:14:56.818 23:22:48 -- bdev/blockdev.sh@316 -- # nbd_pid=69223 00:14:56.818 23:22:48 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:56.818 23:22:48 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:56.818 23:22:48 -- bdev/blockdev.sh@318 -- # waitforlisten 69223 /var/tmp/spdk-nbd.sock 00:14:56.818 23:22:48 -- common/autotest_common.sh@819 -- # '[' -z 69223 ']' 00:14:56.818 23:22:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:56.818 23:22:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:56.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:56.818 23:22:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:56.818 23:22:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:56.818 23:22:48 -- common/autotest_common.sh@10 -- # set +x 00:14:56.818 [2024-07-26 23:22:48.443671] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:56.818 [2024-07-26 23:22:48.443822] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.077 [2024-07-26 23:22:48.621550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.077 [2024-07-26 23:22:48.830438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.454 23:22:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:58.454 23:22:49 -- common/autotest_common.sh@852 -- # return 0 00:14:58.454 23:22:49 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:14:58.454 23:22:49 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:58.454 23:22:49 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:58.454 23:22:49 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:58.454 23:22:49 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:14:58.454 23:22:49 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:58.454 23:22:49 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:58.454 23:22:49 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:58.454 23:22:49 -- bdev/nbd_common.sh@24 -- # local i 00:14:58.454 23:22:49 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:58.454 23:22:49 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:58.454 23:22:49 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:58.454 23:22:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:58.454 23:22:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:58.454 23:22:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:58.454 23:22:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:58.454 23:22:50 -- common/autotest_common.sh@856 -- # local 
nbd_name=nbd0 00:14:58.454 23:22:50 -- common/autotest_common.sh@857 -- # local i 00:14:58.454 23:22:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:58.454 23:22:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:58.454 23:22:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:58.454 23:22:50 -- common/autotest_common.sh@861 -- # break 00:14:58.454 23:22:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:58.454 23:22:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:58.454 23:22:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.454 1+0 records in 00:14:58.454 1+0 records out 00:14:58.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658257 s, 6.2 MB/s 00:14:58.454 23:22:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.454 23:22:50 -- common/autotest_common.sh@874 -- # size=4096 00:14:58.454 23:22:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.454 23:22:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:58.454 23:22:50 -- common/autotest_common.sh@877 -- # return 0 00:14:58.454 23:22:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:58.454 23:22:50 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:58.454 23:22:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:58.713 23:22:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:58.713 23:22:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:58.713 23:22:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:58.713 23:22:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:14:58.713 23:22:50 -- common/autotest_common.sh@857 -- # local i 00:14:58.713 23:22:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:58.713 23:22:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:58.713 23:22:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:14:58.713 23:22:50 -- common/autotest_common.sh@861 -- # break 00:14:58.713 23:22:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:58.713 23:22:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:58.713 23:22:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.713 1+0 records in 00:14:58.713 1+0 records out 00:14:58.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663169 s, 6.2 MB/s 00:14:58.713 23:22:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.713 23:22:50 -- common/autotest_common.sh@874 -- # size=4096 00:14:58.713 23:22:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.713 23:22:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:58.713 23:22:50 -- common/autotest_common.sh@877 -- # return 0 00:14:58.713 23:22:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:58.713 23:22:50 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:58.713 23:22:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:14:58.972 23:22:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:58.972 23:22:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:58.972 23:22:50 -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:14:58.972 23:22:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:14:58.972 23:22:50 -- common/autotest_common.sh@857 -- # local i 00:14:58.972 23:22:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:58.972 23:22:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:58.972 23:22:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:14:58.972 23:22:50 -- common/autotest_common.sh@861 -- # break 00:14:58.972 23:22:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:58.972 23:22:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:58.972 23:22:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.972 1+0 records in 00:14:58.972 1+0 records out 00:14:58.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599571 s, 6.8 MB/s 00:14:58.972 23:22:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.972 23:22:50 -- common/autotest_common.sh@874 -- # size=4096 00:14:58.972 23:22:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.972 23:22:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:58.972 23:22:50 -- common/autotest_common.sh@877 -- # return 0 00:14:58.972 23:22:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:58.972 23:22:50 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:58.972 23:22:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:14:59.231 23:22:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:59.231 23:22:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:59.231 23:22:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:59.231 23:22:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:14:59.231 23:22:50 -- common/autotest_common.sh@857 -- # local i 00:14:59.231 23:22:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:59.232 23:22:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:59.232 23:22:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:14:59.232 23:22:50 -- common/autotest_common.sh@861 -- # break 00:14:59.232 23:22:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:59.232 23:22:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:59.232 23:22:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.232 1+0 records in 00:14:59.232 1+0 records out 00:14:59.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664279 s, 6.2 MB/s 00:14:59.232 23:22:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.232 23:22:50 -- common/autotest_common.sh@874 -- # size=4096 00:14:59.232 23:22:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.232 23:22:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:59.232 23:22:50 -- common/autotest_common.sh@877 -- # return 0 00:14:59.232 23:22:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:59.232 23:22:50 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:59.232 23:22:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:59.490 23:22:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:59.490 23:22:51 -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:59.490 23:22:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:59.490 23:22:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:14:59.490 23:22:51 -- common/autotest_common.sh@857 -- # local i 00:14:59.490 23:22:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:59.490 23:22:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:59.490 23:22:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:14:59.490 23:22:51 -- common/autotest_common.sh@861 -- # break 00:14:59.490 23:22:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:59.490 23:22:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:59.490 23:22:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.490 1+0 records in 00:14:59.490 1+0 records out 00:14:59.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769493 s, 5.3 MB/s 00:14:59.490 23:22:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.490 23:22:51 -- common/autotest_common.sh@874 -- # size=4096 00:14:59.490 23:22:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.490 23:22:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:59.490 23:22:51 -- common/autotest_common.sh@877 -- # return 0 00:14:59.490 23:22:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:59.490 23:22:51 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:59.490 23:22:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:59.490 23:22:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:59.490 23:22:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:59.749 23:22:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:59.749 23:22:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:14:59.749 23:22:51 -- common/autotest_common.sh@857 -- # local i 00:14:59.749 23:22:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:59.749 23:22:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:59.749 23:22:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:14:59.749 23:22:51 -- common/autotest_common.sh@861 -- # break 00:14:59.749 23:22:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:59.749 23:22:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:59.749 23:22:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.749 1+0 records in 00:14:59.749 1+0 records out 00:14:59.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000862944 s, 4.7 MB/s 00:14:59.749 23:22:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.749 23:22:51 -- common/autotest_common.sh@874 -- # size=4096 00:14:59.749 23:22:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.749 23:22:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:59.749 23:22:51 -- common/autotest_common.sh@877 -- # return 0 00:14:59.749 23:22:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:59.749 23:22:51 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:59.749 23:22:51 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:59.749 23:22:51 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:59.749 { 00:14:59.749 "nbd_device": "/dev/nbd0", 00:14:59.749 "bdev_name": "nvme0n1" 00:14:59.749 }, 00:14:59.749 { 00:14:59.749 "nbd_device": "/dev/nbd1", 00:14:59.749 "bdev_name": "nvme1n1" 00:14:59.749 }, 00:14:59.749 { 00:14:59.749 "nbd_device": "/dev/nbd2", 00:14:59.749 "bdev_name": "nvme1n2" 00:14:59.749 }, 00:14:59.749 { 00:14:59.749 "nbd_device": "/dev/nbd3", 00:14:59.749 "bdev_name": "nvme1n3" 00:14:59.749 }, 00:14:59.749 { 00:14:59.749 "nbd_device": "/dev/nbd4", 00:14:59.749 "bdev_name": "nvme2n1" 00:14:59.749 }, 00:14:59.749 { 00:14:59.749 "nbd_device": "/dev/nbd5", 00:14:59.749 "bdev_name": "nvme3n1" 00:14:59.749 } 00:14:59.749 ]' 00:14:59.749 23:22:51 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:59.750 23:22:51 -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:59.750 { 00:14:59.750 "nbd_device": "/dev/nbd0", 00:14:59.750 "bdev_name": "nvme0n1" 00:14:59.750 }, 00:14:59.750 { 00:14:59.750 "nbd_device": "/dev/nbd1", 00:14:59.750 "bdev_name": "nvme1n1" 00:14:59.750 }, 00:14:59.750 { 00:14:59.750 "nbd_device": "/dev/nbd2", 00:14:59.750 "bdev_name": "nvme1n2" 00:14:59.750 }, 00:14:59.750 { 00:14:59.750 "nbd_device": "/dev/nbd3", 00:14:59.750 "bdev_name": "nvme1n3" 00:14:59.750 }, 00:14:59.750 { 00:14:59.750 "nbd_device": "/dev/nbd4", 00:14:59.750 "bdev_name": "nvme2n1" 00:14:59.750 }, 00:14:59.750 { 00:14:59.750 "nbd_device": "/dev/nbd5", 00:14:59.750 "bdev_name": "nvme3n1" 00:14:59.750 } 00:14:59.750 ]' 00:14:59.750 23:22:51 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@51 -- # local i 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@41 -- # break 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.009 23:22:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:00.268 23:22:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:00.268 23:22:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:00.268 23:22:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:00.268 23:22:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.268 23:22:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.268 23:22:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:15:00.268 23:22:51 -- bdev/nbd_common.sh@41 -- # break 00:15:00.268 23:22:51 -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.268 23:22:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.268 23:22:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@41 -- # break 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@41 -- # break 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.528 23:22:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:00.787 23:22:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:00.787 23:22:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:00.787 23:22:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:00.787 23:22:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.787 23:22:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.787 23:22:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:00.787 23:22:52 -- bdev/nbd_common.sh@41 -- # break 00:15:00.787 23:22:52 -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.787 23:22:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.787 23:22:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:01.047 23:22:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:01.047 23:22:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:01.047 23:22:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:01.047 23:22:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.047 23:22:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.047 23:22:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:01.047 23:22:52 -- bdev/nbd_common.sh@41 -- # break 00:15:01.047 23:22:52 -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.047 23:22:52 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:01.047 23:22:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:01.047 23:22:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
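Everything traced through this stretch is the nbd_rpc_start_stop_verify pattern: each bdev is exported as a kernel /dev/nbdX device over the dedicated RPC socket, the helper waits until the node appears in /proc/partitions and a direct-I/O read succeeds, and teardown then polls until the node disappears; the nbd_get_disks call above returns an empty list just below, confirming every export was torn down. Boiled down to one device (a sketch, simplified from nbd_common.sh; the sleep loop stands in for the helper's retry logic):

    # Export one bdev over NBD, verify the kernel device, then tear it down.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk nvme0n1 /dev/nbd0
    grep -q -w nbd0 /proc/partitions                            # node exists
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # and is readable
    $rpc nbd_stop_disk /dev/nbd0
    while grep -q -w nbd0 /proc/partitions; do sleep 0.1; done  # node gone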
00:15:01.305 23:22:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:01.305 23:22:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:01.305 23:22:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:01.305 23:22:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:01.305 23:22:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:01.305 23:22:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:01.305 23:22:52 -- bdev/nbd_common.sh@65 -- # true 00:15:01.305 23:22:52 -- bdev/nbd_common.sh@65 -- # count=0 00:15:01.305 23:22:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:01.305 23:22:52 -- bdev/nbd_common.sh@122 -- # count=0 00:15:01.305 23:22:52 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:01.305 23:22:52 -- bdev/nbd_common.sh@127 -- # return 0 00:15:01.306 23:22:52 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@12 -- # local i 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:01.306 23:22:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:01.577 /dev/nbd0 00:15:01.577 23:22:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:01.577 23:22:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:01.577 23:22:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:01.577 23:22:53 -- common/autotest_common.sh@857 -- # local i 00:15:01.577 23:22:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:01.577 23:22:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:01.577 23:22:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:01.577 23:22:53 -- common/autotest_common.sh@861 -- # break 00:15:01.577 23:22:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:01.577 23:22:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:01.577 23:22:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.577 1+0 records in 00:15:01.577 1+0 records out 00:15:01.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069559 s, 
5.9 MB/s 00:15:01.577 23:22:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.577 23:22:53 -- common/autotest_common.sh@874 -- # size=4096 00:15:01.577 23:22:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.577 23:22:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:01.577 23:22:53 -- common/autotest_common.sh@877 -- # return 0 00:15:01.577 23:22:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.577 23:22:53 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:01.577 23:22:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:01.577 /dev/nbd1 00:15:01.578 23:22:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:01.578 23:22:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:01.578 23:22:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:15:01.578 23:22:53 -- common/autotest_common.sh@857 -- # local i 00:15:01.578 23:22:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:01.578 23:22:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:01.578 23:22:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:15:01.578 23:22:53 -- common/autotest_common.sh@861 -- # break 00:15:01.578 23:22:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:01.578 23:22:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:01.578 23:22:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.838 1+0 records in 00:15:01.838 1+0 records out 00:15:01.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626657 s, 6.5 MB/s 00:15:01.838 23:22:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.838 23:22:53 -- common/autotest_common.sh@874 -- # size=4096 00:15:01.838 23:22:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.838 23:22:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:01.838 23:22:53 -- common/autotest_common.sh@877 -- # return 0 00:15:01.838 23:22:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.838 23:22:53 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:01.838 23:22:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:15:01.838 /dev/nbd10 00:15:01.838 23:22:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:01.838 23:22:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:01.838 23:22:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:15:01.838 23:22:53 -- common/autotest_common.sh@857 -- # local i 00:15:01.838 23:22:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:01.838 23:22:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:01.838 23:22:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:15:01.838 23:22:53 -- common/autotest_common.sh@861 -- # break 00:15:01.838 23:22:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:01.838 23:22:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:01.838 23:22:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.838 1+0 records in 00:15:01.838 1+0 records out 00:15:01.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000901363 s, 4.5 MB/s 00:15:01.838 23:22:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.838 23:22:53 -- common/autotest_common.sh@874 -- # size=4096 00:15:01.838 23:22:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.839 23:22:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:01.839 23:22:53 -- common/autotest_common.sh@877 -- # return 0 00:15:01.839 23:22:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.839 23:22:53 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:01.839 23:22:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:15:02.095 /dev/nbd11 00:15:02.095 23:22:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:02.095 23:22:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:02.095 23:22:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:15:02.095 23:22:53 -- common/autotest_common.sh@857 -- # local i 00:15:02.095 23:22:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:02.095 23:22:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:02.095 23:22:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:15:02.095 23:22:53 -- common/autotest_common.sh@861 -- # break 00:15:02.095 23:22:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:02.096 23:22:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:02.096 23:22:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.096 1+0 records in 00:15:02.096 1+0 records out 00:15:02.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707807 s, 5.8 MB/s 00:15:02.096 23:22:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.096 23:22:53 -- common/autotest_common.sh@874 -- # size=4096 00:15:02.096 23:22:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.096 23:22:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:02.096 23:22:53 -- common/autotest_common.sh@877 -- # return 0 00:15:02.096 23:22:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.096 23:22:53 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:02.096 23:22:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:02.354 /dev/nbd12 00:15:02.354 23:22:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:02.354 23:22:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:02.354 23:22:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:15:02.354 23:22:54 -- common/autotest_common.sh@857 -- # local i 00:15:02.354 23:22:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:02.354 23:22:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:02.354 23:22:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:15:02.354 23:22:54 -- common/autotest_common.sh@861 -- # break 00:15:02.354 23:22:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:02.354 23:22:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:02.354 23:22:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.354 1+0 records in 00:15:02.354 1+0 records out 00:15:02.354 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000825242 s, 5.0 MB/s 00:15:02.354 23:22:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.354 23:22:54 -- common/autotest_common.sh@874 -- # size=4096 00:15:02.354 23:22:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.354 23:22:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:02.354 23:22:54 -- common/autotest_common.sh@877 -- # return 0 00:15:02.354 23:22:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.354 23:22:54 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:02.354 23:22:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:02.613 /dev/nbd13 00:15:02.613 23:22:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:02.613 23:22:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:02.613 23:22:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:15:02.613 23:22:54 -- common/autotest_common.sh@857 -- # local i 00:15:02.613 23:22:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:02.613 23:22:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:02.613 23:22:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:15:02.613 23:22:54 -- common/autotest_common.sh@861 -- # break 00:15:02.613 23:22:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:02.613 23:22:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:02.613 23:22:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.613 1+0 records in 00:15:02.613 1+0 records out 00:15:02.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000923156 s, 4.4 MB/s 00:15:02.613 23:22:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.613 23:22:54 -- common/autotest_common.sh@874 -- # size=4096 00:15:02.613 23:22:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.613 23:22:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:02.613 23:22:54 -- common/autotest_common.sh@877 -- # return 0 00:15:02.613 23:22:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.613 23:22:54 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:02.613 23:22:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:02.613 23:22:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:02.613 23:22:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:02.872 { 00:15:02.872 "nbd_device": "/dev/nbd0", 00:15:02.872 "bdev_name": "nvme0n1" 00:15:02.872 }, 00:15:02.872 { 00:15:02.872 "nbd_device": "/dev/nbd1", 00:15:02.872 "bdev_name": "nvme1n1" 00:15:02.872 }, 00:15:02.872 { 00:15:02.872 "nbd_device": "/dev/nbd10", 00:15:02.872 "bdev_name": "nvme1n2" 00:15:02.872 }, 00:15:02.872 { 00:15:02.872 "nbd_device": "/dev/nbd11", 00:15:02.872 "bdev_name": "nvme1n3" 00:15:02.872 }, 00:15:02.872 { 00:15:02.872 "nbd_device": "/dev/nbd12", 00:15:02.872 "bdev_name": "nvme2n1" 00:15:02.872 }, 00:15:02.872 { 00:15:02.872 "nbd_device": "/dev/nbd13", 00:15:02.872 "bdev_name": "nvme3n1" 00:15:02.872 } 00:15:02.872 ]' 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:02.872 { 00:15:02.872 "nbd_device": 
"/dev/nbd0", 00:15:02.872 "bdev_name": "nvme0n1" 00:15:02.872 }, 00:15:02.872 { 00:15:02.872 "nbd_device": "/dev/nbd1", 00:15:02.872 "bdev_name": "nvme1n1" 00:15:02.872 }, 00:15:02.872 { 00:15:02.872 "nbd_device": "/dev/nbd10", 00:15:02.872 "bdev_name": "nvme1n2" 00:15:02.872 }, 00:15:02.872 { 00:15:02.872 "nbd_device": "/dev/nbd11", 00:15:02.872 "bdev_name": "nvme1n3" 00:15:02.872 }, 00:15:02.872 { 00:15:02.872 "nbd_device": "/dev/nbd12", 00:15:02.872 "bdev_name": "nvme2n1" 00:15:02.872 }, 00:15:02.872 { 00:15:02.872 "nbd_device": "/dev/nbd13", 00:15:02.872 "bdev_name": "nvme3n1" 00:15:02.872 } 00:15:02.872 ]' 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:02.872 /dev/nbd1 00:15:02.872 /dev/nbd10 00:15:02.872 /dev/nbd11 00:15:02.872 /dev/nbd12 00:15:02.872 /dev/nbd13' 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:02.872 /dev/nbd1 00:15:02.872 /dev/nbd10 00:15:02.872 /dev/nbd11 00:15:02.872 /dev/nbd12 00:15:02.872 /dev/nbd13' 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@65 -- # count=6 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@66 -- # echo 6 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@95 -- # count=6 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:02.872 256+0 records in 00:15:02.872 256+0 records out 00:15:02.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117967 s, 88.9 MB/s 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:02.872 23:22:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:03.132 256+0 records in 00:15:03.132 256+0 records out 00:15:03.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118769 s, 8.8 MB/s 00:15:03.132 23:22:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:03.132 23:22:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:03.132 256+0 records in 00:15:03.132 256+0 records out 00:15:03.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130679 s, 8.0 MB/s 00:15:03.132 23:22:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:03.132 23:22:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:03.391 256+0 records in 00:15:03.391 256+0 records out 00:15:03.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124558 s, 8.4 MB/s 00:15:03.391 23:22:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:03.391 23:22:54 -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:03.391 256+0 records in 00:15:03.391 256+0 records out 00:15:03.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1252 s, 8.4 MB/s 00:15:03.391 23:22:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:03.391 23:22:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:03.651 256+0 records in 00:15:03.651 256+0 records out 00:15:03.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154397 s, 6.8 MB/s 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:03.651 256+0 records in 00:15:03.651 256+0 records out 00:15:03.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12715 s, 8.2 MB/s 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:03.651 23:22:55 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@51 -- # local i 00:15:03.910 23:22:55 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@41 -- # break 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@45 -- # return 0 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.910 23:22:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:04.169 23:22:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:04.169 23:22:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:04.169 23:22:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:04.169 23:22:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.169 23:22:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.169 23:22:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:04.169 23:22:55 -- bdev/nbd_common.sh@41 -- # break 00:15:04.169 23:22:55 -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.169 23:22:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.169 23:22:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:04.428 23:22:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:04.428 23:22:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:04.428 23:22:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:04.428 23:22:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.428 23:22:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.428 23:22:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:04.428 23:22:55 -- bdev/nbd_common.sh@41 -- # break 00:15:04.428 23:22:55 -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.428 23:22:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.428 23:22:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:04.428 23:22:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:04.428 23:22:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:04.428 23:22:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:04.428 23:22:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.428 23:22:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.428 23:22:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:04.428 23:22:56 -- bdev/nbd_common.sh@41 -- # break 00:15:04.428 23:22:56 -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.428 23:22:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.428 23:22:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:04.687 23:22:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:04.687 23:22:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:04.687 23:22:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:04.687 23:22:56 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.687 23:22:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.687 23:22:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:04.687 23:22:56 -- bdev/nbd_common.sh@41 -- # break 00:15:04.687 23:22:56 -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.687 23:22:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.687 23:22:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:04.946 23:22:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:04.946 23:22:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:04.946 23:22:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:04.946 23:22:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.946 23:22:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.946 23:22:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:04.946 23:22:56 -- bdev/nbd_common.sh@41 -- # break 00:15:04.946 23:22:56 -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.946 23:22:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:04.946 23:22:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:04.946 23:22:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:05.205 23:22:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:05.205 23:22:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:05.205 23:22:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@65 -- # true 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@65 -- # count=0 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@104 -- # count=0 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@109 -- # return 0 00:15:05.206 23:22:56 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:15:05.206 23:22:56 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:05.464 malloc_lvol_verify 00:15:05.464 23:22:57 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:05.464 cd01e292-8c06-40ca-a840-0abc48d7df51 00:15:05.464 23:22:57 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:05.723 4711546c-b1a3-41b3-bfb4-bb89b00e7fe4 00:15:05.723 23:22:57 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:05.982 /dev/nbd0 00:15:05.982 23:22:57 -- 
bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:15:05.982 mke2fs 1.46.5 (30-Dec-2021) 00:15:05.982 Discarding device blocks: 0/4096 done 00:15:05.982 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:05.982 00:15:05.982 Allocating group tables: 0/1 done 00:15:05.982 Writing inode tables: 0/1 done 00:15:05.982 Creating journal (1024 blocks): done 00:15:05.982 Writing superblocks and filesystem accounting information: 0/1 done 00:15:05.982 00:15:05.982 23:22:57 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:15:05.982 23:22:57 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:05.982 23:22:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:05.982 23:22:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:05.982 23:22:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.982 23:22:57 -- bdev/nbd_common.sh@51 -- # local i 00:15:05.982 23:22:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.982 23:22:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:06.241 23:22:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:06.241 23:22:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:06.241 23:22:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:06.241 23:22:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.241 23:22:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.241 23:22:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:06.241 23:22:57 -- bdev/nbd_common.sh@41 -- # break 00:15:06.241 23:22:57 -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.241 23:22:57 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:15:06.241 23:22:57 -- bdev/nbd_common.sh@147 -- # return 0 00:15:06.241 23:22:57 -- bdev/blockdev.sh@324 -- # killprocess 69223 00:15:06.241 23:22:57 -- common/autotest_common.sh@926 -- # '[' -z 69223 ']' 00:15:06.241 23:22:57 -- common/autotest_common.sh@930 -- # kill -0 69223 00:15:06.241 23:22:57 -- common/autotest_common.sh@931 -- # uname 00:15:06.241 23:22:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:06.241 23:22:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69223 00:15:06.241 23:22:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:06.241 23:22:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:06.241 killing process with pid 69223 00:15:06.241 23:22:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69223' 00:15:06.241 23:22:57 -- common/autotest_common.sh@945 -- # kill 69223 00:15:06.241 23:22:57 -- common/autotest_common.sh@950 -- # wait 69223 00:15:07.626 23:22:59 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:15:07.626 00:15:07.626 real 0m10.676s 00:15:07.626 user 0m13.363s 00:15:07.626 sys 0m4.276s 00:15:07.626 23:22:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.626 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:15:07.626 ************************************ 00:15:07.626 END TEST bdev_nbd 00:15:07.626 ************************************ 00:15:07.626 23:22:59 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:15:07.626 23:22:59 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:15:07.626 23:22:59 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:15:07.626 23:22:59 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:15:07.626 23:22:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 
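For readers following the trace, the nbd_dd_data_verify write/verify cycle that just completed boils down to a handful of coreutils calls. A minimal standalone sketch (bash; it assumes the six NBD devices are already exported by spdk-nbd as in the run above, and substitutes /tmp/nbdrandtest for the repo-local temp file):

    # build a 1 MiB random pattern: 256 blocks of 4 KiB (nbd_common.sh@76)
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        # write the pattern with O_DIRECT so it hits the bdev, not the page cache (@78)
        dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
        # read back and compare the first 1 MiB byte-for-byte; cmp exits non-zero on mismatch (@83)
        cmp -b -n 1M /tmp/nbdrandtest "$dev"
    done
    rm /tmp/nbdrandtest

The throughput gap visible in the trace (88.9 MB/s for the urandom fill vs roughly 7-9 MB/s per NBD device) is expected with this pattern: oflag=direct turns each 4 KiB block into a synchronous round trip through the kernel nbd driver and the SPDK target.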
00:15:07.626 23:22:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:07.626 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:15:07.626 ************************************ 00:15:07.626 START TEST bdev_fio 00:15:07.626 ************************************ 00:15:07.626 23:22:59 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:15:07.626 23:22:59 -- bdev/blockdev.sh@329 -- # local env_context 00:15:07.626 23:22:59 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:07.626 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:07.626 23:22:59 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:07.626 23:22:59 -- bdev/blockdev.sh@337 -- # echo '' 00:15:07.626 23:22:59 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:15:07.626 23:22:59 -- bdev/blockdev.sh@337 -- # env_context= 00:15:07.626 23:22:59 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:07.626 23:22:59 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:07.626 23:22:59 -- common/autotest_common.sh@1260 -- # local workload=verify 00:15:07.626 23:22:59 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:15:07.626 23:22:59 -- common/autotest_common.sh@1262 -- # local env_context= 00:15:07.626 23:22:59 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:15:07.626 23:22:59 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:07.626 23:22:59 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:15:07.626 23:22:59 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:15:07.626 23:22:59 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:07.626 23:22:59 -- common/autotest_common.sh@1280 -- # cat 00:15:07.626 23:22:59 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:15:07.626 23:22:59 -- common/autotest_common.sh@1293 -- # cat 00:15:07.626 23:22:59 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:15:07.626 23:22:59 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:15:07.626 23:22:59 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:07.626 23:22:59 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:15:07.626 23:22:59 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:07.626 23:22:59 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:15:07.626 23:22:59 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:15:07.626 23:22:59 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:07.626 23:22:59 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:15:07.626 23:22:59 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:15:07.626 23:22:59 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:07.626 23:22:59 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:15:07.626 23:22:59 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:15:07.626 23:22:59 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:07.626 23:22:59 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:15:07.626 23:22:59 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:15:07.626 23:22:59 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:07.626 23:22:59 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:15:07.626 23:22:59 -- bdev/blockdev.sh@341 -- # echo 
filename=nvme2n1 00:15:07.626 23:22:59 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:07.626 23:22:59 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:15:07.626 23:22:59 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:15:07.626 23:22:59 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:07.626 23:22:59 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:07.626 23:22:59 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:15:07.626 23:22:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:07.626 23:22:59 -- common/autotest_common.sh@10 -- # set +x 00:15:07.626 ************************************ 00:15:07.626 START TEST bdev_fio_rw_verify 00:15:07.626 ************************************ 00:15:07.626 23:22:59 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:07.627 23:22:59 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:07.627 23:22:59 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:07.627 23:22:59 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:07.627 23:22:59 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:07.627 23:22:59 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.627 23:22:59 -- common/autotest_common.sh@1320 -- # shift 00:15:07.627 23:22:59 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:07.627 23:22:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:07.627 23:22:59 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.627 23:22:59 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:07.627 23:22:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:07.627 23:22:59 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:07.627 23:22:59 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:07.627 23:22:59 -- common/autotest_common.sh@1326 -- # break 00:15:07.627 23:22:59 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:07.627 23:22:59 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:07.886 
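Before launching fio, the harness resolves which ASan runtime (if any) the spdk_bdev fio plugin links against and preloads it ahead of the plugin: fio itself is uninstrumented, and an ASan-instrumented shared object can only be loaded if the sanitizer runtime is already in the process. A rough standalone equivalent of the autotest_common.sh@1316-1331 logic traced above, using the plugin and config paths from this run:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # pick the sanitizer runtime out of the plugin's ldd output (third column)
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload the runtime ahead of the plugin itself, exactly as in the trace
    [[ -n "$asan_lib" ]] && preload="$asan_lib $plugin" || preload="$plugin"
    LD_PRELOAD="$preload" /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k \
        --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output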
job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:07.886 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:07.886 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:07.886 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:07.886 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:07.886 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:07.886 fio-3.35 00:15:07.886 Starting 6 threads 00:15:20.090 00:15:20.090 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69635: Fri Jul 26 23:23:10 2024 00:15:20.090 read: IOPS=34.0k, BW=133MiB/s (139MB/s)(1329MiB/10001msec) 00:15:20.090 slat (usec): min=2, max=974, avg= 7.05, stdev= 5.66 00:15:20.090 clat (usec): min=86, max=4912, avg=538.96, stdev=228.23 00:15:20.090 lat (usec): min=89, max=4922, avg=546.01, stdev=229.29 00:15:20.090 clat percentiles (usec): 00:15:20.090 | 50.000th=[ 545], 99.000th=[ 1172], 99.900th=[ 1795], 99.990th=[ 3851], 00:15:20.090 | 99.999th=[ 4883] 00:15:20.090 write: IOPS=34.4k, BW=134MiB/s (141MB/s)(1342MiB/10001msec); 0 zone resets 00:15:20.090 slat (usec): min=10, max=4224, avg=23.53, stdev=31.87 00:15:20.090 clat (usec): min=83, max=3746, avg=632.35, stdev=244.54 00:15:20.090 lat (usec): min=100, max=4626, avg=655.89, stdev=248.97 00:15:20.090 clat percentiles (usec): 00:15:20.090 | 50.000th=[ 627], 99.000th=[ 1369], 99.900th=[ 1975], 99.990th=[ 2868], 00:15:20.090 | 99.999th=[ 3687] 00:15:20.090 bw ( KiB/s): min=107597, max=168744, per=100.00%, avg=137664.05, stdev=2761.66, samples=114 00:15:20.090 iops : min=26899, max=42186, avg=34415.53, stdev=690.47, samples=114 00:15:20.090 lat (usec) : 100=0.01%, 250=7.20%, 500=28.38%, 750=44.66%, 1000=15.37% 00:15:20.090 lat (msec) : 2=4.30%, 4=0.08%, 10=0.01% 00:15:20.090 cpu : usr=58.20%, sys=27.73%, ctx=8233, majf=0, minf=29917 00:15:20.090 IO depths : 1=12.1%, 2=24.5%, 4=50.5%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:20.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.090 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.090 issued rwts: total=340307,343667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.090 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:20.090 00:15:20.090 Run status group 0 (all jobs): 00:15:20.090 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=1329MiB (1394MB), run=10001-10001msec 00:15:20.090 WRITE: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=1342MiB (1408MB), run=10001-10001msec 00:15:20.090 ----------------------------------------------------- 00:15:20.090 Suppressions used: 00:15:20.090 count bytes template 00:15:20.090 6 48 /usr/src/fio/parse.c 00:15:20.090 3111 298656 /usr/src/fio/iolog.c 00:15:20.090 1 8 libtcmalloc_minimal.so 00:15:20.090 1 904 libcrypto.so 00:15:20.090 ----------------------------------------------------- 00:15:20.090 00:15:20.090 00:15:20.090 real 0m12.492s 00:15:20.090 user 0m37.024s 00:15:20.090 sys 0m17.127s 00:15:20.090 23:23:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:20.090 23:23:11 -- common/autotest_common.sh@10 -- # set +x 00:15:20.090 
************************************ 00:15:20.090 END TEST bdev_fio_rw_verify 00:15:20.090 ************************************ 00:15:20.090 23:23:11 -- bdev/blockdev.sh@348 -- # rm -f 00:15:20.090 23:23:11 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:20.091 23:23:11 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:20.091 23:23:11 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:20.091 23:23:11 -- common/autotest_common.sh@1260 -- # local workload=trim 00:15:20.091 23:23:11 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:15:20.091 23:23:11 -- common/autotest_common.sh@1262 -- # local env_context= 00:15:20.091 23:23:11 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:15:20.091 23:23:11 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:20.091 23:23:11 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:15:20.091 23:23:11 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:15:20.091 23:23:11 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:20.091 23:23:11 -- common/autotest_common.sh@1280 -- # cat 00:15:20.091 23:23:11 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:15:20.091 23:23:11 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:15:20.091 23:23:11 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:15:20.091 23:23:11 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:20.091 23:23:11 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "3fe74e85-d32e-4e97-84a9-a253f692a5cd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3fe74e85-d32e-4e97-84a9-a253f692a5cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "66a26e78-9911-480e-b6a8-8baaa6975b51"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "66a26e78-9911-480e-b6a8-8baaa6975b51",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "bb591f15-b56a-447d-b392-8f2f0f0de022"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb591f15-b56a-447d-b392-8f2f0f0de022",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' 
' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "af881825-cdb4-4cc6-9001-be6d171f6a84"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "af881825-cdb4-4cc6-9001-be6d171f6a84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "69b5117f-97a5-430b-accf-fe3df78ef490"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "69b5117f-97a5-430b-accf-fe3df78ef490",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "37b5462b-e068-4344-a795-cfbba1f99096"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "37b5462b-e068-4344-a795-cfbba1f99096",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:15:20.091 23:23:11 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:15:20.091 23:23:11 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:20.091 /home/vagrant/spdk_repo/spdk 00:15:20.091 23:23:11 -- bdev/blockdev.sh@360 -- # popd 00:15:20.091 23:23:11 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:15:20.091 23:23:11 -- bdev/blockdev.sh@362 -- # return 0 00:15:20.091 00:15:20.091 real 0m12.716s 00:15:20.091 user 0m37.128s 00:15:20.091 sys 0m17.253s 00:15:20.091 23:23:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:20.091 ************************************ 00:15:20.091 END TEST bdev_fio 00:15:20.091 23:23:11 -- common/autotest_common.sh@10 -- # set +x 00:15:20.091 ************************************ 00:15:20.350 23:23:11 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:20.350 23:23:11 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:20.350 23:23:11 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:15:20.350 23:23:11 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:15:20.350 23:23:11 -- common/autotest_common.sh@10 -- # set +x 00:15:20.350 ************************************ 00:15:20.350 START TEST bdev_verify 00:15:20.350 ************************************ 00:15:20.350 23:23:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:20.350 [2024-07-26 23:23:11.967036] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:20.350 [2024-07-26 23:23:11.967143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69805 ] 00:15:20.610 [2024-07-26 23:23:12.136964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:20.868 [2024-07-26 23:23:12.398872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.868 [2024-07-26 23:23:12.398900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.436 Running I/O for 5 seconds... 00:15:26.708 00:15:26.708 Latency(us) 00:15:26.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.708 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:26.708 Verification LBA range: start 0x0 length 0x20000 00:15:26.709 nvme0n1 : 5.07 1803.54 7.05 0.00 0.00 70844.20 6711.52 79590.71 00:15:26.709 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:26.709 Verification LBA range: start 0x20000 length 0x20000 00:15:26.709 nvme0n1 : 5.10 1459.79 5.70 0.00 0.00 87203.27 26740.79 128861.15 00:15:26.709 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:26.709 Verification LBA range: start 0x0 length 0x80000 00:15:26.709 nvme1n1 : 5.08 1724.39 6.74 0.00 0.00 73920.29 5895.61 119596.62 00:15:26.709 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:26.709 Verification LBA range: start 0x80000 length 0x80000 00:15:26.709 nvme1n1 : 5.09 1395.11 5.45 0.00 0.00 91343.25 5027.06 125492.23 00:15:26.709 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:26.709 Verification LBA range: start 0x0 length 0x80000 00:15:26.709 nvme1n2 : 5.08 1764.09 6.89 0.00 0.00 72200.55 14423.18 109489.86 00:15:26.709 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:26.709 Verification LBA range: start 0x80000 length 0x80000 00:15:26.709 nvme1n2 : 5.11 1398.09 5.46 0.00 0.00 90847.35 29056.93 124650.00 00:15:26.709 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:26.709 Verification LBA range: start 0x0 length 0x80000 00:15:26.709 nvme1n3 : 5.08 1652.69 6.46 0.00 0.00 76965.87 16844.59 108647.63 00:15:26.709 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:26.709 Verification LBA range: start 0x80000 length 0x80000 00:15:26.709 nvme1n3 : 5.10 1385.54 5.41 0.00 0.00 91784.73 6843.12 127176.69 00:15:26.709 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:26.709 Verification LBA range: start 0x0 length 0xbd0bd 00:15:26.709 nvme2n1 : 5.08 2002.05 7.82 0.00 0.00 63486.21 8159.10 116227.70 00:15:26.709 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:26.709 Verification LBA range: start 0xbd0bd 
length 0xbd0bd 00:15:26.709 nvme2n1 : 5.11 1483.22 5.79 0.00 0.00 85438.13 12896.64 110332.09 00:15:26.709 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:26.709 Verification LBA range: start 0x0 length 0xa0000 00:15:26.709 nvme3n1 : 5.08 1661.08 6.49 0.00 0.00 76381.40 13107.20 110332.09 00:15:26.709 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:26.709 Verification LBA range: start 0xa0000 length 0xa0000 00:15:26.709 nvme3n1 : 5.10 1355.95 5.30 0.00 0.00 93184.53 8474.94 108647.63 00:15:26.709 =================================================================================================================== 00:15:26.709 Total : 19085.54 74.55 0.00 0.00 79963.94 5027.06 128861.15 00:15:28.087 00:15:28.087 real 0m7.622s 00:15:28.087 user 0m10.301s 00:15:28.087 sys 0m3.039s 00:15:28.087 23:23:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.087 23:23:19 -- common/autotest_common.sh@10 -- # set +x 00:15:28.087 ************************************ 00:15:28.087 END TEST bdev_verify 00:15:28.087 ************************************ 00:15:28.087 23:23:19 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:28.087 23:23:19 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:15:28.087 23:23:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:28.087 23:23:19 -- common/autotest_common.sh@10 -- # set +x 00:15:28.087 ************************************ 00:15:28.087 START TEST bdev_verify_big_io 00:15:28.087 ************************************ 00:15:28.087 23:23:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:28.087 [2024-07-26 23:23:19.655218] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:28.087 [2024-07-26 23:23:19.655321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69920 ] 00:15:28.087 [2024-07-26 23:23:19.823700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:28.655 [2024-07-26 23:23:20.103252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.655 [2024-07-26 23:23:20.103278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.253 Running I/O for 5 seconds... 
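This big-I/O pass drives the same bdevperf harness as the verify pass above; only the I/O size changes (-o 4096 there vs -o 65536 here). The flag meanings can be read straight off the per-job headers in the result tables (a sketch of the invocation from the trace; -C is passed through from the test script unchanged):

    # -q 128    queue depth per job             ("depth: 128" in the job header)
    # -o 65536  64 KiB I/Os for the big-io run  ("IO size: 65536"; the first pass used -o 4096)
    # -w verify write, read back, and compare workload
    # -t 5      run each pass for 5 seconds
    # -m 0x3    cores 0 and 1 (one reactor per core, as logged)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3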
00:15:35.840 00:15:35.840 Latency(us) 00:15:35.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.841 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:35.841 Verification LBA range: start 0x0 length 0x2000 00:15:35.841 nvme0n1 : 5.39 481.58 30.10 0.00 0.00 260226.92 29267.48 448066.21 00:15:35.841 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:35.841 Verification LBA range: start 0x2000 length 0x2000 00:15:35.841 nvme0n1 : 5.45 304.37 19.02 0.00 0.00 412355.85 66536.15 539027.02 00:15:35.841 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:35.841 Verification LBA range: start 0x0 length 0x8000 00:15:35.841 nvme1n1 : 5.39 451.95 28.25 0.00 0.00 274616.03 28635.81 343629.73 00:15:35.841 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:35.841 Verification LBA range: start 0x8000 length 0x8000 00:15:35.841 nvme1n1 : 5.45 257.52 16.09 0.00 0.00 474205.18 36636.99 515444.59 00:15:35.841 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:35.841 Verification LBA range: start 0x0 length 0x8000 00:15:35.841 nvme1n2 : 5.40 497.75 31.11 0.00 0.00 246462.37 29056.93 318362.83 00:15:35.841 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:35.841 Verification LBA range: start 0x8000 length 0x8000 00:15:35.841 nvme1n2 : 5.50 268.93 16.81 0.00 0.00 448068.95 18423.78 616512.15 00:15:35.841 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:35.841 Verification LBA range: start 0x0 length 0x8000 00:15:35.841 nvme1n3 : 5.40 464.77 29.05 0.00 0.00 263283.11 33268.07 352052.02 00:15:35.841 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:35.841 Verification LBA range: start 0x8000 length 0x8000 00:15:35.841 nvme1n3 : 5.47 237.01 14.81 0.00 0.00 501898.37 45059.29 646832.42 00:15:35.841 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:35.841 Verification LBA range: start 0x0 length 0xbd0b 00:15:35.841 nvme2n1 : 5.41 464.10 29.01 0.00 0.00 261912.27 30109.71 333522.97 00:15:35.841 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:35.841 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:35.841 nvme2n1 : 5.51 353.41 22.09 0.00 0.00 329680.35 13001.92 512075.67 00:15:35.841 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:35.841 Verification LBA range: start 0x0 length 0xa000 00:15:35.841 nvme3n1 : 5.41 480.41 30.03 0.00 0.00 251322.28 18950.17 343629.73 00:15:35.841 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:35.841 Verification LBA range: start 0xa000 length 0xa000 00:15:35.841 nvme3n1 : 5.57 342.84 21.43 0.00 0.00 333296.17 664.57 471648.64 00:15:35.841 =================================================================================================================== 00:15:35.841 Total : 4604.64 287.79 0.00 0.00 316367.57 664.57 646832.42 00:15:36.409 00:15:36.409 real 0m8.489s 00:15:36.409 user 0m14.550s 00:15:36.409 sys 0m1.006s 00:15:36.409 23:23:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.409 23:23:28 -- common/autotest_common.sh@10 -- # set +x 00:15:36.409 ************************************ 00:15:36.409 END TEST bdev_verify_big_io 00:15:36.409 ************************************ 00:15:36.409 23:23:28 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:36.409 23:23:28 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:15:36.409 23:23:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:36.409 23:23:28 -- common/autotest_common.sh@10 -- # set +x 00:15:36.409 ************************************ 00:15:36.409 START TEST bdev_write_zeroes 00:15:36.409 ************************************ 00:15:36.409 23:23:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:36.668 [2024-07-26 23:23:28.243354] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:36.668 [2024-07-26 23:23:28.243470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70031 ] 00:15:36.668 [2024-07-26 23:23:28.415631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.927 [2024-07-26 23:23:28.677933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.496 Running I/O for 1 seconds... 00:15:38.876 00:15:38.876 Latency(us) 00:15:38.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.876 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.876 nvme0n1 : 1.03 6933.42 27.08 0.00 0.00 18443.27 9790.92 29688.60 00:15:38.876 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.876 nvme1n1 : 1.04 6916.18 27.02 0.00 0.00 18474.59 9790.92 29478.04 00:15:38.876 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.876 nvme1n2 : 1.04 6898.77 26.95 0.00 0.00 18504.79 9843.56 29267.48 00:15:38.876 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.876 nvme1n3 : 1.04 6881.38 26.88 0.00 0.00 18542.06 9896.20 29267.48 00:15:38.876 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.876 nvme2n1 : 1.03 11109.54 43.40 0.00 0.00 11474.50 5369.21 22108.53 00:15:38.876 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.876 nvme3n1 : 1.04 6863.85 26.81 0.00 0.00 18447.00 3579.48 28004.14 00:15:38.876 =================================================================================================================== 00:15:38.876 Total : 45603.13 178.14 0.00 0.00 16784.91 3579.48 29688.60 00:15:40.256 00:15:40.256 real 0m3.523s 00:15:40.256 user 0m2.716s 00:15:40.256 sys 0m0.623s 00:15:40.256 23:23:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.256 23:23:31 -- common/autotest_common.sh@10 -- # set +x 00:15:40.256 ************************************ 00:15:40.256 END TEST bdev_write_zeroes 00:15:40.256 ************************************ 00:15:40.256 23:23:31 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:40.256 23:23:31 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:15:40.256 23:23:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:40.256 23:23:31 -- common/autotest_common.sh@10 -- 
# set +x 00:15:40.256 ************************************ 00:15:40.256 START TEST bdev_json_nonenclosed 00:15:40.256 ************************************ 00:15:40.256 23:23:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:40.256 [2024-07-26 23:23:31.823134] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:40.256 [2024-07-26 23:23:31.823233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70101 ] 00:15:40.256 [2024-07-26 23:23:31.992179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.515 [2024-07-26 23:23:32.252673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.515 [2024-07-26 23:23:32.252877] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:40.515 [2024-07-26 23:23:32.252903] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:41.083 00:15:41.083 real 0m0.987s 00:15:41.083 user 0m0.702s 00:15:41.083 sys 0m0.178s 00:15:41.083 23:23:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:41.083 ************************************ 00:15:41.083 END TEST bdev_json_nonenclosed 00:15:41.083 ************************************ 00:15:41.083 23:23:32 -- common/autotest_common.sh@10 -- # set +x 00:15:41.083 23:23:32 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:41.083 23:23:32 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:15:41.083 23:23:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:41.083 23:23:32 -- common/autotest_common.sh@10 -- # set +x 00:15:41.083 ************************************ 00:15:41.083 START TEST bdev_json_nonarray 00:15:41.083 ************************************ 00:15:41.083 23:23:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:41.342 [2024-07-26 23:23:32.899719] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:41.342 [2024-07-26 23:23:32.899828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70132 ] 00:15:41.342 [2024-07-26 23:23:33.069254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.601 [2024-07-26 23:23:33.330951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.601 [2024-07-26 23:23:33.331163] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
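bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each hands bdevperf a deliberately malformed config and passes only if the JSON loader rejects it, which is what the two *ERROR* lines in the trace show. Judging by those messages, the loader requires a top-level object whose "subsystems" key is an array; a well-formed skeleton would look like the following (a sketch inferred from the error text — the file name is hypothetical, not the test fixture):

    cat > /tmp/minimal-config.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF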
00:15:41.601 [2024-07-26 23:23:33.331187] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:42.170 00:15:42.170 real 0m0.988s 00:15:42.170 user 0m0.706s 00:15:42.170 sys 0m0.176s 00:15:42.170 23:23:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.170 ************************************ 00:15:42.170 END TEST bdev_json_nonarray 00:15:42.170 ************************************ 00:15:42.170 23:23:33 -- common/autotest_common.sh@10 -- # set +x 00:15:42.170 23:23:33 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:15:42.170 23:23:33 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:15:42.171 23:23:33 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:15:42.171 23:23:33 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:15:42.171 23:23:33 -- bdev/blockdev.sh@809 -- # cleanup 00:15:42.171 23:23:33 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:42.171 23:23:33 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:42.171 23:23:33 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:15:42.171 23:23:33 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:15:42.171 23:23:33 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:15:42.171 23:23:33 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:15:42.171 23:23:33 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:43.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:44.927 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:15:44.927 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:15:44.927 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:15:44.928 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:15:45.187 ************************************ 00:15:45.187 END TEST blockdev_xnvme 00:15:45.187 00:15:45.187 real 1m4.498s 00:15:45.187 user 1m40.906s 00:15:45.187 sys 0m33.476s 00:15:45.187 23:23:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.187 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:15:45.187 ************************************ 00:15:45.187 23:23:36 -- spdk/autotest.sh@259 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:45.187 23:23:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:45.187 23:23:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:45.187 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:15:45.187 ************************************ 00:15:45.187 START TEST ublk 00:15:45.187 ************************************ 00:15:45.187 23:23:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:45.187 * Looking for test storage... 
00:15:45.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:45.187 23:23:36 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:45.187 23:23:36 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:45.187 23:23:36 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:45.187 23:23:36 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:45.187 23:23:36 -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:45.187 23:23:36 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:45.187 23:23:36 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:45.187 23:23:36 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:45.188 23:23:36 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:45.188 23:23:36 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:45.188 23:23:36 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:45.188 23:23:36 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:45.188 23:23:36 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:45.188 23:23:36 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:45.188 23:23:36 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:45.188 23:23:36 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:45.188 23:23:36 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:45.188 23:23:36 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:45.188 23:23:36 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:45.188 23:23:36 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:45.188 23:23:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:45.188 23:23:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:45.188 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:15:45.448 ************************************ 00:15:45.448 START TEST test_save_ublk_config 00:15:45.448 ************************************ 00:15:45.448 23:23:36 -- common/autotest_common.sh@1104 -- # test_save_config 00:15:45.448 23:23:36 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:45.448 23:23:36 -- ublk/ublk.sh@103 -- # tgtpid=70424 00:15:45.448 23:23:36 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:45.448 23:23:36 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:45.448 23:23:36 -- ublk/ublk.sh@106 -- # waitforlisten 70424 00:15:45.448 23:23:36 -- common/autotest_common.sh@819 -- # '[' -z 70424 ']' 00:15:45.448 23:23:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.448 23:23:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:45.448 23:23:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.448 23:23:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:45.448 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:15:45.448 [2024-07-26 23:23:37.050879] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:45.448 [2024-07-26 23:23:37.050995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70424 ] 00:15:45.707 [2024-07-26 23:23:37.223643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.967 [2024-07-26 23:23:37.504150] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:45.967 [2024-07-26 23:23:37.504350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.875 23:23:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:47.875 23:23:39 -- common/autotest_common.sh@852 -- # return 0 00:15:47.875 23:23:39 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:47.875 23:23:39 -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:47.875 23:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.875 23:23:39 -- common/autotest_common.sh@10 -- # set +x 00:15:47.875 [2024-07-26 23:23:39.151370] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:47.875 malloc0 00:15:47.875 [2024-07-26 23:23:39.254124] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:47.875 [2024-07-26 23:23:39.254221] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:47.875 [2024-07-26 23:23:39.254231] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:47.875 [2024-07-26 23:23:39.254242] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:47.875 [2024-07-26 23:23:39.263086] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:47.875 [2024-07-26 23:23:39.263118] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:47.875 [2024-07-26 23:23:39.269999] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:47.875 [2024-07-26 23:23:39.270117] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:47.875 [2024-07-26 23:23:39.286985] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:47.875 0 00:15:47.875 23:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.875 23:23:39 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:47.875 23:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.875 23:23:39 -- common/autotest_common.sh@10 -- # set +x 00:15:47.875 23:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.875 23:23:39 -- ublk/ublk.sh@115 -- # config='{ 00:15:47.875 "subsystems": [ 00:15:47.875 { 00:15:47.875 "subsystem": "iobuf", 00:15:47.875 "config": [ 00:15:47.875 { 00:15:47.875 "method": "iobuf_set_options", 00:15:47.875 "params": { 00:15:47.875 "small_pool_count": 8192, 00:15:47.875 "large_pool_count": 1024, 00:15:47.875 "small_bufsize": 8192, 00:15:47.875 "large_bufsize": 135168 00:15:47.875 } 00:15:47.875 } 00:15:47.875 ] 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "subsystem": "sock", 00:15:47.875 "config": [ 00:15:47.875 { 00:15:47.875 "method": "sock_impl_set_options", 00:15:47.875 "params": { 00:15:47.875 "impl_name": "posix", 00:15:47.875 "recv_buf_size": 2097152, 00:15:47.875 "send_buf_size": 2097152, 00:15:47.875 "enable_recv_pipe": true, 00:15:47.875 "enable_quickack": false, 00:15:47.875 "enable_placement_id": 0, 00:15:47.875 
"enable_zerocopy_send_server": true, 00:15:47.875 "enable_zerocopy_send_client": false, 00:15:47.875 "zerocopy_threshold": 0, 00:15:47.875 "tls_version": 0, 00:15:47.875 "enable_ktls": false 00:15:47.875 } 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "method": "sock_impl_set_options", 00:15:47.875 "params": { 00:15:47.875 "impl_name": "ssl", 00:15:47.875 "recv_buf_size": 4096, 00:15:47.875 "send_buf_size": 4096, 00:15:47.875 "enable_recv_pipe": true, 00:15:47.875 "enable_quickack": false, 00:15:47.875 "enable_placement_id": 0, 00:15:47.875 "enable_zerocopy_send_server": true, 00:15:47.875 "enable_zerocopy_send_client": false, 00:15:47.875 "zerocopy_threshold": 0, 00:15:47.875 "tls_version": 0, 00:15:47.875 "enable_ktls": false 00:15:47.875 } 00:15:47.875 } 00:15:47.875 ] 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "subsystem": "vmd", 00:15:47.875 "config": [] 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "subsystem": "accel", 00:15:47.875 "config": [ 00:15:47.875 { 00:15:47.875 "method": "accel_set_options", 00:15:47.875 "params": { 00:15:47.875 "small_cache_size": 128, 00:15:47.875 "large_cache_size": 16, 00:15:47.875 "task_count": 2048, 00:15:47.875 "sequence_count": 2048, 00:15:47.875 "buf_count": 2048 00:15:47.875 } 00:15:47.875 } 00:15:47.875 ] 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "subsystem": "bdev", 00:15:47.875 "config": [ 00:15:47.875 { 00:15:47.875 "method": "bdev_set_options", 00:15:47.875 "params": { 00:15:47.875 "bdev_io_pool_size": 65535, 00:15:47.875 "bdev_io_cache_size": 256, 00:15:47.875 "bdev_auto_examine": true, 00:15:47.875 "iobuf_small_cache_size": 128, 00:15:47.875 "iobuf_large_cache_size": 16 00:15:47.875 } 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "method": "bdev_raid_set_options", 00:15:47.875 "params": { 00:15:47.875 "process_window_size_kb": 1024 00:15:47.875 } 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "method": "bdev_iscsi_set_options", 00:15:47.875 "params": { 00:15:47.875 "timeout_sec": 30 00:15:47.875 } 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "method": "bdev_nvme_set_options", 00:15:47.875 "params": { 00:15:47.875 "action_on_timeout": "none", 00:15:47.875 "timeout_us": 0, 00:15:47.875 "timeout_admin_us": 0, 00:15:47.875 "keep_alive_timeout_ms": 10000, 00:15:47.875 "transport_retry_count": 4, 00:15:47.875 "arbitration_burst": 0, 00:15:47.875 "low_priority_weight": 0, 00:15:47.875 "medium_priority_weight": 0, 00:15:47.875 "high_priority_weight": 0, 00:15:47.875 "nvme_adminq_poll_period_us": 10000, 00:15:47.875 "nvme_ioq_poll_period_us": 0, 00:15:47.875 "io_queue_requests": 0, 00:15:47.875 "delay_cmd_submit": true, 00:15:47.875 "bdev_retry_count": 3, 00:15:47.875 "transport_ack_timeout": 0, 00:15:47.875 "ctrlr_loss_timeout_sec": 0, 00:15:47.875 "reconnect_delay_sec": 0, 00:15:47.875 "fast_io_fail_timeout_sec": 0, 00:15:47.875 "generate_uuids": false, 00:15:47.875 "transport_tos": 0, 00:15:47.875 "io_path_stat": false, 00:15:47.875 "allow_accel_sequence": false 00:15:47.875 } 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "method": "bdev_nvme_set_hotplug", 00:15:47.875 "params": { 00:15:47.875 "period_us": 100000, 00:15:47.875 "enable": false 00:15:47.875 } 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "method": "bdev_malloc_create", 00:15:47.875 "params": { 00:15:47.875 "name": "malloc0", 00:15:47.875 "num_blocks": 8192, 00:15:47.875 "block_size": 4096, 00:15:47.875 "physical_block_size": 4096, 00:15:47.875 "uuid": "816a80d1-50af-420a-91a9-15d6f40f6303", 00:15:47.875 "optimal_io_boundary": 0 00:15:47.875 } 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 
"method": "bdev_wait_for_examine" 00:15:47.875 } 00:15:47.875 ] 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "subsystem": "scsi", 00:15:47.875 "config": null 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "subsystem": "scheduler", 00:15:47.875 "config": [ 00:15:47.875 { 00:15:47.875 "method": "framework_set_scheduler", 00:15:47.875 "params": { 00:15:47.875 "name": "static" 00:15:47.875 } 00:15:47.875 } 00:15:47.875 ] 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "subsystem": "vhost_scsi", 00:15:47.875 "config": [] 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "subsystem": "vhost_blk", 00:15:47.875 "config": [] 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "subsystem": "ublk", 00:15:47.875 "config": [ 00:15:47.875 { 00:15:47.875 "method": "ublk_create_target", 00:15:47.875 "params": { 00:15:47.875 "cpumask": "1" 00:15:47.875 } 00:15:47.875 }, 00:15:47.875 { 00:15:47.875 "method": "ublk_start_disk", 00:15:47.875 "params": { 00:15:47.876 "bdev_name": "malloc0", 00:15:47.876 "ublk_id": 0, 00:15:47.876 "num_queues": 1, 00:15:47.876 "queue_depth": 128 00:15:47.876 } 00:15:47.876 } 00:15:47.876 ] 00:15:47.876 }, 00:15:47.876 { 00:15:47.876 "subsystem": "nbd", 00:15:47.876 "config": [] 00:15:47.876 }, 00:15:47.876 { 00:15:47.876 "subsystem": "nvmf", 00:15:47.876 "config": [ 00:15:47.876 { 00:15:47.876 "method": "nvmf_set_config", 00:15:47.876 "params": { 00:15:47.876 "discovery_filter": "match_any", 00:15:47.876 "admin_cmd_passthru": { 00:15:47.876 "identify_ctrlr": false 00:15:47.876 } 00:15:47.876 } 00:15:47.876 }, 00:15:47.876 { 00:15:47.876 "method": "nvmf_set_max_subsystems", 00:15:47.876 "params": { 00:15:47.876 "max_subsystems": 1024 00:15:47.876 } 00:15:47.876 }, 00:15:47.876 { 00:15:47.876 "method": "nvmf_set_crdt", 00:15:47.876 "params": { 00:15:47.876 "crdt1": 0, 00:15:47.876 "crdt2": 0, 00:15:47.876 "crdt3": 0 00:15:47.876 } 00:15:47.876 } 00:15:47.876 ] 00:15:47.876 }, 00:15:47.876 { 00:15:47.876 "subsystem": "iscsi", 00:15:47.876 "config": [ 00:15:47.876 { 00:15:47.876 "method": "iscsi_set_options", 00:15:47.876 "params": { 00:15:47.876 "node_base": "iqn.2016-06.io.spdk", 00:15:47.876 "max_sessions": 128, 00:15:47.876 "max_connections_per_session": 2, 00:15:47.876 "max_queue_depth": 64, 00:15:47.876 "default_time2wait": 2, 00:15:47.876 "default_time2retain": 20, 00:15:47.876 "first_burst_length": 8192, 00:15:47.876 "immediate_data": true, 00:15:47.876 "allow_duplicated_isid": false, 00:15:47.876 "error_recovery_level": 0, 00:15:47.876 "nop_timeout": 60, 00:15:47.876 "nop_in_interval": 30, 00:15:47.876 "disable_chap": false, 00:15:47.876 "require_chap": false, 00:15:47.876 "mutual_chap": false, 00:15:47.876 "chap_group": 0, 00:15:47.876 "max_large_datain_per_connection": 64, 00:15:47.876 "max_r2t_per_connection": 4, 00:15:47.876 "pdu_pool_size": 36864, 00:15:47.876 "immediate_data_pool_size": 16384, 00:15:47.876 "data_out_pool_size": 2048 00:15:47.876 } 00:15:47.876 } 00:15:47.876 ] 00:15:47.876 } 00:15:47.876 ] 00:15:47.876 }' 00:15:47.876 23:23:39 -- ublk/ublk.sh@116 -- # killprocess 70424 00:15:47.876 23:23:39 -- common/autotest_common.sh@926 -- # '[' -z 70424 ']' 00:15:47.876 23:23:39 -- common/autotest_common.sh@930 -- # kill -0 70424 00:15:47.876 23:23:39 -- common/autotest_common.sh@931 -- # uname 00:15:47.876 23:23:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:47.876 23:23:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70424 00:15:47.876 23:23:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:47.876 23:23:39 -- 
common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:47.876 killing process with pid 70424 00:15:47.876 23:23:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70424' 00:15:47.876 23:23:39 -- common/autotest_common.sh@945 -- # kill 70424 00:15:47.876 23:23:39 -- common/autotest_common.sh@950 -- # wait 70424 00:15:49.783 [2024-07-26 23:23:41.089569] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:49.783 [2024-07-26 23:23:41.127061] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:49.783 [2024-07-26 23:23:41.127238] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:49.783 [2024-07-26 23:23:41.135037] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:49.783 [2024-07-26 23:23:41.135100] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:49.783 [2024-07-26 23:23:41.135109] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:49.783 [2024-07-26 23:23:41.135146] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:49.783 [2024-07-26 23:23:41.135308] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:51.164 23:23:42 -- ublk/ublk.sh@119 -- # tgtpid=70505 00:15:51.164 23:23:42 -- ublk/ublk.sh@121 -- # waitforlisten 70505 00:15:51.164 23:23:42 -- common/autotest_common.sh@819 -- # '[' -z 70505 ']' 00:15:51.164 23:23:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.164 23:23:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:51.164 23:23:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:51.164 23:23:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:51.164 23:23:42 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:51.164 23:23:42 -- common/autotest_common.sh@10 -- # set +x 00:15:51.164 23:23:42 -- ublk/ublk.sh@118 -- # echo '{ 00:15:51.164 "subsystems": [ 00:15:51.164 { 00:15:51.164 "subsystem": "iobuf", 00:15:51.164 "config": [ 00:15:51.164 { 00:15:51.164 "method": "iobuf_set_options", 00:15:51.164 "params": { 00:15:51.164 "small_pool_count": 8192, 00:15:51.164 "large_pool_count": 1024, 00:15:51.164 "small_bufsize": 8192, 00:15:51.164 "large_bufsize": 135168 00:15:51.164 } 00:15:51.164 } 00:15:51.164 ] 00:15:51.164 }, 00:15:51.164 { 00:15:51.164 "subsystem": "sock", 00:15:51.164 "config": [ 00:15:51.164 { 00:15:51.164 "method": "sock_impl_set_options", 00:15:51.164 "params": { 00:15:51.164 "impl_name": "posix", 00:15:51.164 "recv_buf_size": 2097152, 00:15:51.164 "send_buf_size": 2097152, 00:15:51.164 "enable_recv_pipe": true, 00:15:51.164 "enable_quickack": false, 00:15:51.164 "enable_placement_id": 0, 00:15:51.164 "enable_zerocopy_send_server": true, 00:15:51.164 "enable_zerocopy_send_client": false, 00:15:51.164 "zerocopy_threshold": 0, 00:15:51.164 "tls_version": 0, 00:15:51.164 "enable_ktls": false 00:15:51.164 } 00:15:51.164 }, 00:15:51.164 { 00:15:51.164 "method": "sock_impl_set_options", 00:15:51.164 "params": { 00:15:51.164 "impl_name": "ssl", 00:15:51.164 "recv_buf_size": 4096, 00:15:51.164 "send_buf_size": 4096, 00:15:51.164 "enable_recv_pipe": true, 00:15:51.164 "enable_quickack": false, 00:15:51.164 "enable_placement_id": 0, 00:15:51.164 "enable_zerocopy_send_server": true, 00:15:51.164 "enable_zerocopy_send_client": false, 00:15:51.164 "zerocopy_threshold": 0, 00:15:51.164 "tls_version": 0, 00:15:51.164 "enable_ktls": false 00:15:51.164 } 00:15:51.164 } 00:15:51.164 ] 00:15:51.164 }, 00:15:51.164 { 00:15:51.164 "subsystem": "vmd", 00:15:51.164 "config": [] 00:15:51.164 }, 00:15:51.164 { 00:15:51.164 "subsystem": "accel", 00:15:51.164 "config": [ 00:15:51.164 { 00:15:51.164 "method": "accel_set_options", 00:15:51.164 "params": { 00:15:51.164 "small_cache_size": 128, 00:15:51.164 "large_cache_size": 16, 00:15:51.164 "task_count": 2048, 00:15:51.164 "sequence_count": 2048, 00:15:51.164 "buf_count": 2048 00:15:51.164 } 00:15:51.164 } 00:15:51.164 ] 00:15:51.164 }, 00:15:51.164 { 00:15:51.164 "subsystem": "bdev", 00:15:51.164 "config": [ 00:15:51.164 { 00:15:51.165 "method": "bdev_set_options", 00:15:51.165 "params": { 00:15:51.165 "bdev_io_pool_size": 65535, 00:15:51.165 "bdev_io_cache_size": 256, 00:15:51.165 "bdev_auto_examine": true, 00:15:51.165 "iobuf_small_cache_size": 128, 00:15:51.165 "iobuf_large_cache_size": 16 00:15:51.165 } 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "method": "bdev_raid_set_options", 00:15:51.165 "params": { 00:15:51.165 "process_window_size_kb": 1024 00:15:51.165 } 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "method": "bdev_iscsi_set_options", 00:15:51.165 "params": { 00:15:51.165 "timeout_sec": 30 00:15:51.165 } 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "method": "bdev_nvme_set_options", 00:15:51.165 "params": { 00:15:51.165 "action_on_timeout": "none", 00:15:51.165 "timeout_us": 0, 00:15:51.165 "timeout_admin_us": 0, 00:15:51.165 "keep_alive_timeout_ms": 10000, 00:15:51.165 "transport_retry_count": 4, 00:15:51.165 "arbitration_burst": 0, 00:15:51.165 "low_priority_weight": 0, 00:15:51.165 "medium_priority_weight": 0, 00:15:51.165 "high_priority_weight": 0, 
00:15:51.165 "nvme_adminq_poll_period_us": 10000, 00:15:51.165 "nvme_ioq_poll_period_us": 0, 00:15:51.165 "io_queue_requests": 0, 00:15:51.165 "delay_cmd_submit": true, 00:15:51.165 "bdev_retry_count": 3, 00:15:51.165 "transport_ack_timeout": 0, 00:15:51.165 "ctrlr_loss_timeout_sec": 0, 00:15:51.165 "reconnect_delay_sec": 0, 00:15:51.165 "fast_io_fail_timeout_sec": 0, 00:15:51.165 "generate_uuids": false, 00:15:51.165 "transport_tos": 0, 00:15:51.165 "io_path_stat": false, 00:15:51.165 "allow_accel_sequence": false 00:15:51.165 } 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "method": "bdev_nvme_set_hotplug", 00:15:51.165 "params": { 00:15:51.165 "period_us": 100000, 00:15:51.165 "enable": false 00:15:51.165 } 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "method": "bdev_malloc_create", 00:15:51.165 "params": { 00:15:51.165 "name": "malloc0", 00:15:51.165 "num_blocks": 8192, 00:15:51.165 "block_size": 4096, 00:15:51.165 "physical_block_size": 4096, 00:15:51.165 "uuid": "816a80d1-50af-420a-91a9-15d6f40f6303", 00:15:51.165 "optimal_io_boundary": 0 00:15:51.165 } 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "method": "bdev_wait_for_examine" 00:15:51.165 } 00:15:51.165 ] 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "subsystem": "scsi", 00:15:51.165 "config": null 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "subsystem": "scheduler", 00:15:51.165 "config": [ 00:15:51.165 { 00:15:51.165 "method": "framework_set_scheduler", 00:15:51.165 "params": { 00:15:51.165 "name": "static" 00:15:51.165 } 00:15:51.165 } 00:15:51.165 ] 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "subsystem": "vhost_scsi", 00:15:51.165 "config": [] 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "subsystem": "vhost_blk", 00:15:51.165 "config": [] 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "subsystem": "ublk", 00:15:51.165 "config": [ 00:15:51.165 { 00:15:51.165 "method": "ublk_create_target", 00:15:51.165 "params": { 00:15:51.165 "cpumask": "1" 00:15:51.165 } 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "method": "ublk_start_disk", 00:15:51.165 "params": { 00:15:51.165 "bdev_name": "malloc0", 00:15:51.165 "ublk_id": 0, 00:15:51.165 "num_queues": 1, 00:15:51.165 "queue_depth": 128 00:15:51.165 } 00:15:51.165 } 00:15:51.165 ] 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "subsystem": "nbd", 00:15:51.165 "config": [] 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "subsystem": "nvmf", 00:15:51.165 "config": [ 00:15:51.165 { 00:15:51.165 "method": "nvmf_set_config", 00:15:51.165 "params": { 00:15:51.165 "discovery_filter": "match_any", 00:15:51.165 "admin_cmd_passthru": { 00:15:51.165 "identify_ctrlr": false 00:15:51.165 } 00:15:51.165 } 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "method": "nvmf_set_max_subsystems", 00:15:51.165 "params": { 00:15:51.165 "max_subsystems": 1024 00:15:51.165 } 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "method": "nvmf_set_crdt", 00:15:51.165 "params": { 00:15:51.165 "crdt1": 0, 00:15:51.165 "crdt2": 0, 00:15:51.165 "crdt3": 0 00:15:51.165 } 00:15:51.165 } 00:15:51.165 ] 00:15:51.165 }, 00:15:51.165 { 00:15:51.165 "subsystem": "iscsi", 00:15:51.165 "config": [ 00:15:51.165 { 00:15:51.165 "method": "iscsi_set_options", 00:15:51.165 "params": { 00:15:51.165 "node_base": "iqn.2016-06.io.spdk", 00:15:51.165 "max_sessions": 128, 00:15:51.165 "max_connections_per_session": 2, 00:15:51.165 "max_queue_depth": 64, 00:15:51.165 "default_time2wait": 2, 00:15:51.165 "default_time2retain": 20, 00:15:51.165 "first_burst_length": 8192, 00:15:51.165 "immediate_data": true, 00:15:51.165 "allow_duplicated_isid": false, 00:15:51.165 
"error_recovery_level": 0, 00:15:51.165 "nop_timeout": 60, 00:15:51.165 "nop_in_interval": 30, 00:15:51.165 "disable_chap": false, 00:15:51.165 "require_chap": false, 00:15:51.165 "mutual_chap": false, 00:15:51.165 "chap_group": 0, 00:15:51.165 "max_large_datain_per_connection": 64, 00:15:51.165 "max_r2t_per_connection": 4, 00:15:51.165 "pdu_pool_size": 36864, 00:15:51.165 "immediate_data_pool_size": 16384, 00:15:51.165 "data_out_pool_size": 2048 00:15:51.165 } 00:15:51.165 } 00:15:51.165 ] 00:15:51.165 } 00:15:51.165 ] 00:15:51.165 }' 00:15:51.165 [2024-07-26 23:23:42.703263] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:51.165 [2024-07-26 23:23:42.703380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70505 ] 00:15:51.165 [2024-07-26 23:23:42.873557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.425 [2024-07-26 23:23:43.138175] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:51.425 [2024-07-26 23:23:43.138372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.810 [2024-07-26 23:23:44.323264] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:52.810 [2024-07-26 23:23:44.330094] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:52.810 [2024-07-26 23:23:44.330182] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:52.810 [2024-07-26 23:23:44.330192] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:52.810 [2024-07-26 23:23:44.330201] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:52.810 [2024-07-26 23:23:44.339076] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:52.810 [2024-07-26 23:23:44.339101] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:52.810 [2024-07-26 23:23:44.345998] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:52.810 [2024-07-26 23:23:44.346104] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:52.810 [2024-07-26 23:23:44.362991] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:53.377 23:23:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:53.377 23:23:44 -- common/autotest_common.sh@852 -- # return 0 00:15:53.377 23:23:44 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:53.377 23:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.377 23:23:44 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:53.377 23:23:44 -- common/autotest_common.sh@10 -- # set +x 00:15:53.377 23:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.377 23:23:44 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:53.377 23:23:44 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:53.377 23:23:44 -- ublk/ublk.sh@125 -- # killprocess 70505 00:15:53.377 23:23:44 -- common/autotest_common.sh@926 -- # '[' -z 70505 ']' 00:15:53.377 23:23:44 -- common/autotest_common.sh@930 -- # kill -0 70505 00:15:53.377 23:23:44 -- common/autotest_common.sh@931 -- # uname 00:15:53.377 23:23:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:15:53.377 23:23:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70505 00:15:53.377 killing process with pid 70505 00:15:53.377 23:23:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:53.377 23:23:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:53.377 23:23:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70505' 00:15:53.377 23:23:44 -- common/autotest_common.sh@945 -- # kill 70505 00:15:53.377 23:23:44 -- common/autotest_common.sh@950 -- # wait 70505 00:15:54.757 [2024-07-26 23:23:46.280259] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:54.757 [2024-07-26 23:23:46.316075] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:54.757 [2024-07-26 23:23:46.316230] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:54.757 [2024-07-26 23:23:46.322999] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:54.757 [2024-07-26 23:23:46.323068] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:54.757 [2024-07-26 23:23:46.323077] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:54.757 [2024-07-26 23:23:46.323112] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:54.757 [2024-07-26 23:23:46.323294] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:56.138 23:23:47 -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:56.138 00:15:56.138 real 0m10.846s 00:15:56.138 user 0m9.595s 00:15:56.138 sys 0m2.513s 00:15:56.138 23:23:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.138 23:23:47 -- common/autotest_common.sh@10 -- # set +x 00:15:56.138 ************************************ 00:15:56.138 END TEST test_save_ublk_config 00:15:56.138 ************************************ 00:15:56.138 23:23:47 -- ublk/ublk.sh@139 -- # spdk_pid=70600 00:15:56.138 23:23:47 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:56.138 23:23:47 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.138 23:23:47 -- ublk/ublk.sh@141 -- # waitforlisten 70600 00:15:56.138 23:23:47 -- common/autotest_common.sh@819 -- # '[' -z 70600 ']' 00:15:56.138 23:23:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.138 23:23:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:56.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.138 23:23:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.138 23:23:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:56.138 23:23:47 -- common/autotest_common.sh@10 -- # set +x 00:15:56.397 [2024-07-26 23:23:47.957128] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:56.397 [2024-07-26 23:23:47.957229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70600 ] 00:15:56.397 [2024-07-26 23:23:48.129053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:56.656 [2024-07-26 23:23:48.386664] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:56.656 [2024-07-26 23:23:48.386938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.656 [2024-07-26 23:23:48.387028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.562 23:23:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:58.562 23:23:50 -- common/autotest_common.sh@852 -- # return 0 00:15:58.562 23:23:50 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:58.562 23:23:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:58.562 23:23:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:58.562 23:23:50 -- common/autotest_common.sh@10 -- # set +x 00:15:58.562 ************************************ 00:15:58.562 START TEST test_create_ublk 00:15:58.562 ************************************ 00:15:58.562 23:23:50 -- common/autotest_common.sh@1104 -- # test_create_ublk 00:15:58.562 23:23:50 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:58.562 23:23:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.562 23:23:50 -- common/autotest_common.sh@10 -- # set +x 00:15:58.562 [2024-07-26 23:23:50.076268] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:58.562 23:23:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.562 23:23:50 -- ublk/ublk.sh@33 -- # ublk_target= 00:15:58.562 23:23:50 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:58.562 23:23:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.562 23:23:50 -- common/autotest_common.sh@10 -- # set +x 00:15:58.821 23:23:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.821 23:23:50 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:58.821 23:23:50 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:58.821 23:23:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.821 23:23:50 -- common/autotest_common.sh@10 -- # set +x 00:15:58.821 [2024-07-26 23:23:50.443151] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:58.821 [2024-07-26 23:23:50.443624] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:58.821 [2024-07-26 23:23:50.443644] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:58.821 [2024-07-26 23:23:50.443656] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:58.821 [2024-07-26 23:23:50.451380] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:58.821 [2024-07-26 23:23:50.451411] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:58.821 [2024-07-26 23:23:50.458010] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:58.821 [2024-07-26 23:23:50.466180] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:58.821 [2024-07-26 23:23:50.488004] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:15:58.821 23:23:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.821 23:23:50 -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:58.821 23:23:50 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:58.821 23:23:50 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:58.821 23:23:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.821 23:23:50 -- common/autotest_common.sh@10 -- # set +x 00:15:58.821 23:23:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.821 23:23:50 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:58.821 { 00:15:58.821 "ublk_device": "/dev/ublkb0", 00:15:58.821 "id": 0, 00:15:58.821 "queue_depth": 512, 00:15:58.821 "num_queues": 4, 00:15:58.821 "bdev_name": "Malloc0" 00:15:58.821 } 00:15:58.821 ]' 00:15:58.821 23:23:50 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:58.821 23:23:50 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:58.821 23:23:50 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:59.080 23:23:50 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:59.080 23:23:50 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:59.080 23:23:50 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:59.080 23:23:50 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:59.080 23:23:50 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:59.080 23:23:50 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:59.080 23:23:50 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:59.080 23:23:50 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:59.080 23:23:50 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:59.080 23:23:50 -- lvol/common.sh@41 -- # local offset=0 00:15:59.080 23:23:50 -- lvol/common.sh@42 -- # local size=134217728 00:15:59.080 23:23:50 -- lvol/common.sh@43 -- # local rw=write 00:15:59.080 23:23:50 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:59.080 23:23:50 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:59.080 23:23:50 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:59.080 23:23:50 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:59.080 23:23:50 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:59.080 23:23:50 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:59.080 23:23:50 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:59.369 fio: verification read phase will never start because write phase uses all of runtime 00:15:59.369 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:59.369 fio-3.35 00:15:59.369 Starting 1 process 00:16:09.351 00:16:09.351 fio_test: (groupid=0, jobs=1): err= 0: pid=70668: Fri Jul 26 23:24:00 2024 00:16:09.351 write: IOPS=6485, BW=25.3MiB/s (26.6MB/s)(253MiB/10001msec); 0 zone resets 00:16:09.351 clat (usec): min=35, max=4109, avg=153.39, stdev=104.40 00:16:09.351 lat (usec): min=36, max=4109, avg=153.82, stdev=104.41 00:16:09.351 clat percentiles (usec): 00:16:09.351 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 106], 20.00th=[ 147], 00:16:09.351 | 
30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:16:09.351 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 176], 95.00th=[ 182], 00:16:09.351 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 2114], 99.95th=[ 2966], 00:16:09.351 | 99.99th=[ 3785] 00:16:09.351 bw ( KiB/s): min=23728, max=57596, per=100.00%, avg=26067.58, stdev=7650.63, samples=19 00:16:09.351 iops : min= 5932, max=14399, avg=6516.89, stdev=1912.66, samples=19 00:16:09.351 lat (usec) : 50=8.86%, 100=0.86%, 250=90.10%, 500=0.01%, 750=0.01% 00:16:09.351 lat (usec) : 1000=0.01% 00:16:09.351 lat (msec) : 2=0.05%, 4=0.10%, 10=0.01% 00:16:09.351 cpu : usr=1.19%, sys=3.91%, ctx=64864, majf=0, minf=796 00:16:09.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:09.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.351 issued rwts: total=0,64862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:09.351 00:16:09.351 Run status group 0 (all jobs): 00:16:09.351 WRITE: bw=25.3MiB/s (26.6MB/s), 25.3MiB/s-25.3MiB/s (26.6MB/s-26.6MB/s), io=253MiB (266MB), run=10001-10001msec 00:16:09.351 00:16:09.351 Disk stats (read/write): 00:16:09.351 ublkb0: ios=0/64230, merge=0/0, ticks=0/9409, in_queue=9409, util=99.13% 00:16:09.351 23:24:00 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:09.351 23:24:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.351 23:24:00 -- common/autotest_common.sh@10 -- # set +x 00:16:09.351 [2024-07-26 23:24:00.992262] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:09.351 [2024-07-26 23:24:01.025512] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:09.351 [2024-07-26 23:24:01.031461] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:09.351 [2024-07-26 23:24:01.041033] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:09.351 [2024-07-26 23:24:01.041417] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:09.351 [2024-07-26 23:24:01.041435] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:09.351 23:24:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.351 23:24:01 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:09.351 23:24:01 -- common/autotest_common.sh@640 -- # local es=0 00:16:09.351 23:24:01 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:09.351 23:24:01 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:09.351 23:24:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.351 23:24:01 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:09.351 23:24:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:09.351 23:24:01 -- common/autotest_common.sh@643 -- # rpc_cmd ublk_stop_disk 0 00:16:09.351 23:24:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.351 23:24:01 -- common/autotest_common.sh@10 -- # set +x 00:16:09.351 [2024-07-26 23:24:01.057122] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:09.351 request: 00:16:09.351 { 00:16:09.351 "ublk_id": 0, 00:16:09.351 "method": "ublk_stop_disk", 00:16:09.351 "req_id": 1 00:16:09.351 } 00:16:09.351 Got JSON-RPC error response 00:16:09.351 response: 00:16:09.351 { 00:16:09.351 "code": -19, 
00:16:09.351 "message": "No such device" 00:16:09.351 } 00:16:09.351 23:24:01 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:09.351 23:24:01 -- common/autotest_common.sh@643 -- # es=1 00:16:09.351 23:24:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:09.351 23:24:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:09.351 23:24:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:09.351 23:24:01 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:09.351 23:24:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.351 23:24:01 -- common/autotest_common.sh@10 -- # set +x 00:16:09.351 [2024-07-26 23:24:01.080112] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:09.351 [2024-07-26 23:24:01.088005] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:09.351 [2024-07-26 23:24:01.088041] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:09.351 23:24:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.351 23:24:01 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:09.351 23:24:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.351 23:24:01 -- common/autotest_common.sh@10 -- # set +x 00:16:09.920 23:24:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.920 23:24:01 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:09.920 23:24:01 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:09.920 23:24:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.920 23:24:01 -- common/autotest_common.sh@10 -- # set +x 00:16:09.920 23:24:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.920 23:24:01 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:09.920 23:24:01 -- lvol/common.sh@26 -- # jq length 00:16:09.920 23:24:01 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:09.920 23:24:01 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:09.920 23:24:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.920 23:24:01 -- common/autotest_common.sh@10 -- # set +x 00:16:09.920 23:24:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.920 23:24:01 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:09.921 23:24:01 -- lvol/common.sh@28 -- # jq length 00:16:09.921 ************************************ 00:16:09.921 END TEST test_create_ublk 00:16:09.921 ************************************ 00:16:09.921 23:24:01 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:09.921 00:16:09.921 real 0m11.516s 00:16:09.921 user 0m0.488s 00:16:09.921 sys 0m0.543s 00:16:09.921 23:24:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.921 23:24:01 -- common/autotest_common.sh@10 -- # set +x 00:16:09.921 23:24:01 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:09.921 23:24:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:09.921 23:24:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:09.921 23:24:01 -- common/autotest_common.sh@10 -- # set +x 00:16:09.921 ************************************ 00:16:09.921 START TEST test_create_multi_ublk 00:16:09.921 ************************************ 00:16:09.921 23:24:01 -- common/autotest_common.sh@1104 -- # test_create_multi_ublk 00:16:09.921 23:24:01 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:09.921 23:24:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.921 23:24:01 -- common/autotest_common.sh@10 -- # set +x 00:16:09.921 [2024-07-26 23:24:01.662080] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target 
created successfully 00:16:09.921 23:24:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.921 23:24:01 -- ublk/ublk.sh@62 -- # ublk_target= 00:16:09.921 23:24:01 -- ublk/ublk.sh@64 -- # seq 0 3 00:16:09.921 23:24:01 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.180 23:24:01 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:10.180 23:24:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.180 23:24:01 -- common/autotest_common.sh@10 -- # set +x 00:16:10.438 23:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.438 23:24:02 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:10.438 23:24:02 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:10.438 23:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.438 23:24:02 -- common/autotest_common.sh@10 -- # set +x 00:16:10.438 [2024-07-26 23:24:02.030185] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:10.438 [2024-07-26 23:24:02.030679] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:10.438 [2024-07-26 23:24:02.030695] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:10.438 [2024-07-26 23:24:02.030707] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.438 [2024-07-26 23:24:02.038021] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.438 [2024-07-26 23:24:02.038053] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.438 [2024-07-26 23:24:02.046021] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.438 [2024-07-26 23:24:02.046636] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:10.438 [2024-07-26 23:24:02.060074] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.438 23:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.438 23:24:02 -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:10.438 23:24:02 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.438 23:24:02 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:10.438 23:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.438 23:24:02 -- common/autotest_common.sh@10 -- # set +x 00:16:10.696 23:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.696 23:24:02 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:10.696 23:24:02 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:10.696 23:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.696 23:24:02 -- common/autotest_common.sh@10 -- # set +x 00:16:10.696 [2024-07-26 23:24:02.440128] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:10.696 [2024-07-26 23:24:02.440649] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:10.696 [2024-07-26 23:24:02.440682] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:10.696 [2024-07-26 23:24:02.440691] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.696 [2024-07-26 23:24:02.448036] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.696 [2024-07-26 23:24:02.448059] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 
00:16:10.955 [2024-07-26 23:24:02.456051] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.955 [2024-07-26 23:24:02.456679] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:10.955 [2024-07-26 23:24:02.472018] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.955 23:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.955 23:24:02 -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:10.955 23:24:02 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.955 23:24:02 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:10.955 23:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.955 23:24:02 -- common/autotest_common.sh@10 -- # set +x 00:16:11.214 23:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.214 23:24:02 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:11.214 23:24:02 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:11.214 23:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.214 23:24:02 -- common/autotest_common.sh@10 -- # set +x 00:16:11.214 [2024-07-26 23:24:02.856206] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:11.214 [2024-07-26 23:24:02.856714] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:11.215 [2024-07-26 23:24:02.856732] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:11.215 [2024-07-26 23:24:02.856748] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:11.215 [2024-07-26 23:24:02.864017] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:11.215 [2024-07-26 23:24:02.864046] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:11.215 [2024-07-26 23:24:02.872024] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:11.215 [2024-07-26 23:24:02.872659] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:11.215 [2024-07-26 23:24:02.884003] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:11.215 23:24:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.215 23:24:02 -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:11.215 23:24:02 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.215 23:24:02 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:11.215 23:24:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.215 23:24:02 -- common/autotest_common.sh@10 -- # set +x 00:16:11.783 23:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.783 23:24:03 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:11.783 23:24:03 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:11.783 23:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.783 23:24:03 -- common/autotest_common.sh@10 -- # set +x 00:16:11.783 [2024-07-26 23:24:03.262161] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:11.783 [2024-07-26 23:24:03.262630] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:11.783 [2024-07-26 23:24:03.262650] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:11.783 [2024-07-26 23:24:03.262659] ublk.c: 
433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:11.783 [2024-07-26 23:24:03.274045] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:11.783 [2024-07-26 23:24:03.274070] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:11.783 [2024-07-26 23:24:03.282030] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:11.783 [2024-07-26 23:24:03.282652] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:11.783 [2024-07-26 23:24:03.299042] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:11.783 23:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.783 23:24:03 -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:11.783 23:24:03 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:11.783 23:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.783 23:24:03 -- common/autotest_common.sh@10 -- # set +x 00:16:11.783 23:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.783 23:24:03 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:11.783 { 00:16:11.783 "ublk_device": "/dev/ublkb0", 00:16:11.783 "id": 0, 00:16:11.783 "queue_depth": 512, 00:16:11.783 "num_queues": 4, 00:16:11.783 "bdev_name": "Malloc0" 00:16:11.783 }, 00:16:11.783 { 00:16:11.783 "ublk_device": "/dev/ublkb1", 00:16:11.783 "id": 1, 00:16:11.783 "queue_depth": 512, 00:16:11.783 "num_queues": 4, 00:16:11.783 "bdev_name": "Malloc1" 00:16:11.783 }, 00:16:11.783 { 00:16:11.783 "ublk_device": "/dev/ublkb2", 00:16:11.783 "id": 2, 00:16:11.783 "queue_depth": 512, 00:16:11.783 "num_queues": 4, 00:16:11.783 "bdev_name": "Malloc2" 00:16:11.783 }, 00:16:11.783 { 00:16:11.783 "ublk_device": "/dev/ublkb3", 00:16:11.783 "id": 3, 00:16:11.783 "queue_depth": 512, 00:16:11.783 "num_queues": 4, 00:16:11.783 "bdev_name": "Malloc3" 00:16:11.783 } 00:16:11.783 ]' 00:16:11.783 23:24:03 -- ublk/ublk.sh@72 -- # seq 0 3 00:16:11.783 23:24:03 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.783 23:24:03 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:11.783 23:24:03 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:11.783 23:24:03 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:11.783 23:24:03 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:11.783 23:24:03 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:11.783 23:24:03 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:11.783 23:24:03 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:11.783 23:24:03 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:11.783 23:24:03 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:12.041 23:24:03 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:12.041 23:24:03 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:12.041 23:24:03 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:12.041 23:24:03 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:12.041 23:24:03 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:12.041 23:24:03 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:12.041 23:24:03 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:12.041 23:24:03 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:12.041 23:24:03 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:12.041 23:24:03 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:12.041 23:24:03 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:12.041 23:24:03 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 
]] 00:16:12.041 23:24:03 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:12.041 23:24:03 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:12.300 23:24:03 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:12.300 23:24:03 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:12.300 23:24:03 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:12.300 23:24:03 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:12.300 23:24:03 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:12.300 23:24:03 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:12.300 23:24:03 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:12.300 23:24:03 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:12.300 23:24:03 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:12.300 23:24:03 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:12.300 23:24:03 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:12.300 23:24:03 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:12.300 23:24:04 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:12.300 23:24:04 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:12.300 23:24:04 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:12.559 23:24:04 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:12.559 23:24:04 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:12.559 23:24:04 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:12.559 23:24:04 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:12.559 23:24:04 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:12.559 23:24:04 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:12.559 23:24:04 -- ublk/ublk.sh@85 -- # seq 0 3 00:16:12.559 23:24:04 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:12.559 23:24:04 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:12.559 23:24:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.559 23:24:04 -- common/autotest_common.sh@10 -- # set +x 00:16:12.559 [2024-07-26 23:24:04.198146] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:12.559 [2024-07-26 23:24:04.236618] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:12.559 [2024-07-26 23:24:04.238359] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:12.559 [2024-07-26 23:24:04.242057] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:12.559 [2024-07-26 23:24:04.242414] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:12.559 [2024-07-26 23:24:04.242450] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:12.559 23:24:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.559 23:24:04 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:12.559 23:24:04 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:12.559 23:24:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.559 23:24:04 -- common/autotest_common.sh@10 -- # set +x 00:16:12.559 [2024-07-26 23:24:04.257194] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:12.559 [2024-07-26 23:24:04.289556] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:12.559 [2024-07-26 23:24:04.295137] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:12.559 [2024-07-26 23:24:04.303996] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:12.559 [2024-07-26 23:24:04.304312] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:12.559 [2024-07-26 23:24:04.304334] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:12.559 23:24:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.559 23:24:04 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:12.559 23:24:04 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:12.559 23:24:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.559 23:24:04 -- common/autotest_common.sh@10 -- # set +x 00:16:12.559 [2024-07-26 23:24:04.310127] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:12.818 [2024-07-26 23:24:04.350077] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:12.818 [2024-07-26 23:24:04.355413] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:12.818 [2024-07-26 23:24:04.366029] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:12.818 [2024-07-26 23:24:04.366372] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:12.818 [2024-07-26 23:24:04.366396] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:12.818 23:24:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.818 23:24:04 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:12.818 23:24:04 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:12.818 23:24:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.818 23:24:04 -- common/autotest_common.sh@10 -- # set +x 00:16:12.818 [2024-07-26 23:24:04.374114] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:12.818 [2024-07-26 23:24:04.418057] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:12.818 [2024-07-26 23:24:04.419427] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:12.818 [2024-07-26 23:24:04.429047] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:12.818 [2024-07-26 23:24:04.429355] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:12.818 [2024-07-26 23:24:04.429369] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:12.818 23:24:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.818 23:24:04 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:13.077 [2024-07-26 23:24:04.607153] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:13.077 [2024-07-26 23:24:04.613955] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:13.077 [2024-07-26 23:24:04.614017] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:13.077 23:24:04 -- ublk/ublk.sh@93 -- # seq 0 3 00:16:13.077 23:24:04 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:13.077 23:24:04 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:13.077 23:24:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.077 23:24:04 -- common/autotest_common.sh@10 -- # set +x 00:16:13.336 23:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.336 23:24:05 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:13.336 23:24:05 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:13.336 23:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.336 23:24:05 -- common/autotest_common.sh@10 -- # set +x 00:16:13.904 23:24:05 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.904 23:24:05 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:13.904 23:24:05 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:13.904 23:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.904 23:24:05 -- common/autotest_common.sh@10 -- # set +x 00:16:14.164 23:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.164 23:24:05 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:14.164 23:24:05 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:14.164 23:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.164 23:24:05 -- common/autotest_common.sh@10 -- # set +x 00:16:14.423 23:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.423 23:24:06 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:14.423 23:24:06 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:14.423 23:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.423 23:24:06 -- common/autotest_common.sh@10 -- # set +x 00:16:14.423 23:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.423 23:24:06 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:14.423 23:24:06 -- lvol/common.sh@26 -- # jq length 00:16:14.682 23:24:06 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:14.682 23:24:06 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:14.682 23:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:14.682 23:24:06 -- common/autotest_common.sh@10 -- # set +x 00:16:14.682 23:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.682 23:24:06 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:14.682 23:24:06 -- lvol/common.sh@28 -- # jq length 00:16:14.682 ************************************ 00:16:14.682 END TEST test_create_multi_ublk 00:16:14.682 ************************************ 00:16:14.682 23:24:06 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:14.682 00:16:14.682 real 0m4.609s 00:16:14.683 user 0m0.996s 00:16:14.683 sys 0m0.230s 00:16:14.683 23:24:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.683 23:24:06 -- common/autotest_common.sh@10 -- # set +x 00:16:14.683 23:24:06 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:14.683 23:24:06 -- ublk/ublk.sh@147 -- # cleanup 00:16:14.683 23:24:06 -- ublk/ublk.sh@130 -- # killprocess 70600 00:16:14.683 23:24:06 -- common/autotest_common.sh@926 -- # '[' -z 70600 ']' 00:16:14.683 23:24:06 -- common/autotest_common.sh@930 -- # kill -0 70600 00:16:14.683 23:24:06 -- common/autotest_common.sh@931 -- # uname 00:16:14.683 23:24:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:14.683 23:24:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70600 00:16:14.683 killing process with pid 70600 00:16:14.683 23:24:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:14.683 23:24:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:14.683 23:24:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70600' 00:16:14.683 23:24:06 -- common/autotest_common.sh@945 -- # kill 70600 00:16:14.683 23:24:06 -- common/autotest_common.sh@950 -- # wait 70600 00:16:16.060 [2024-07-26 23:24:07.539564] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:16.061 [2024-07-26 23:24:07.539617] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:17.516 00:16:17.516 real 0m32.108s 00:16:17.516 user 0m48.623s 00:16:17.516 sys 0m7.525s 00:16:17.516 ************************************ 
00:16:17.516 END TEST ublk 00:16:17.516 ************************************ 00:16:17.516 23:24:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.516 23:24:08 -- common/autotest_common.sh@10 -- # set +x 00:16:17.516 23:24:08 -- spdk/autotest.sh@260 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:17.516 23:24:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:17.516 23:24:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:17.516 23:24:08 -- common/autotest_common.sh@10 -- # set +x 00:16:17.516 ************************************ 00:16:17.516 START TEST ublk_recovery 00:16:17.516 ************************************ 00:16:17.516 23:24:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:17.516 * Looking for test storage... 00:16:17.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:17.516 23:24:09 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:17.516 23:24:09 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:17.516 23:24:09 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:17.516 23:24:09 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:17.516 23:24:09 -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:17.516 23:24:09 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:17.516 23:24:09 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:17.516 23:24:09 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:17.516 23:24:09 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:17.516 23:24:09 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:17.516 23:24:09 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71016 00:16:17.516 23:24:09 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:17.516 23:24:09 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:17.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.516 23:24:09 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71016 00:16:17.516 23:24:09 -- common/autotest_common.sh@819 -- # '[' -z 71016 ']' 00:16:17.516 23:24:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.516 23:24:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:17.517 23:24:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.517 23:24:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:17.517 23:24:09 -- common/autotest_common.sh@10 -- # set +x 00:16:17.517 [2024-07-26 23:24:09.215204] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:17.517 [2024-07-26 23:24:09.215525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71016 ] 00:16:17.775 [2024-07-26 23:24:09.390441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:18.034 [2024-07-26 23:24:09.600172] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:18.034 [2024-07-26 23:24:09.600768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.034 [2024-07-26 23:24:09.600807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.971 23:24:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:18.971 23:24:10 -- common/autotest_common.sh@852 -- # return 0 00:16:18.971 23:24:10 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:18.971 23:24:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.971 23:24:10 -- common/autotest_common.sh@10 -- # set +x 00:16:18.971 [2024-07-26 23:24:10.640859] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:18.971 23:24:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.971 23:24:10 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:18.971 23:24:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.971 23:24:10 -- common/autotest_common.sh@10 -- # set +x 00:16:19.230 malloc0 00:16:19.230 23:24:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.230 23:24:10 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:19.230 23:24:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.230 23:24:10 -- common/autotest_common.sh@10 -- # set +x 00:16:19.230 [2024-07-26 23:24:10.819118] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:16:19.230 [2024-07-26 23:24:10.819254] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:19.230 [2024-07-26 23:24:10.819264] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:19.230 [2024-07-26 23:24:10.819275] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:19.230 [2024-07-26 23:24:10.828081] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:19.230 [2024-07-26 23:24:10.828112] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:19.230 [2024-07-26 23:24:10.834998] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:19.230 [2024-07-26 23:24:10.835161] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:19.231 [2024-07-26 23:24:10.845999] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:19.231 1 00:16:19.231 23:24:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.231 23:24:10 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:20.167 23:24:11 -- ublk/ublk_recovery.sh@31 -- # fio_proc=71057 00:16:20.167 23:24:11 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:20.167 23:24:11 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:20.426 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:20.426 fio-3.35 00:16:20.426 Starting 1 process 00:16:25.695 23:24:16 -- ublk/ublk_recovery.sh@36 -- # kill -9 71016 00:16:25.695 23:24:16 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:30.969 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71016 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:30.969 23:24:21 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71168 00:16:30.969 23:24:21 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:30.969 23:24:21 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:30.969 23:24:21 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71168 00:16:30.969 23:24:21 -- common/autotest_common.sh@819 -- # '[' -z 71168 ']' 00:16:30.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.969 23:24:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.969 23:24:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:30.969 23:24:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.969 23:24:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:30.969 23:24:21 -- common/autotest_common.sh@10 -- # set +x 00:16:30.969 [2024-07-26 23:24:21.987900] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:30.969 [2024-07-26 23:24:21.988033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71168 ] 00:16:30.969 [2024-07-26 23:24:22.163347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:30.969 [2024-07-26 23:24:22.376313] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:30.969 [2024-07-26 23:24:22.376731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.969 [2024-07-26 23:24:22.376760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.907 23:24:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:31.908 23:24:23 -- common/autotest_common.sh@852 -- # return 0 00:16:31.908 23:24:23 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:31.908 23:24:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.908 23:24:23 -- common/autotest_common.sh@10 -- # set +x 00:16:31.908 [2024-07-26 23:24:23.411419] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:31.908 23:24:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.908 23:24:23 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:31.908 23:24:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.908 23:24:23 -- common/autotest_common.sh@10 -- # set +x 00:16:31.908 malloc0 00:16:31.908 23:24:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.908 23:24:23 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:31.908 23:24:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.908 23:24:23 -- common/autotest_common.sh@10 -- # set +x 00:16:31.908 [2024-07-26 23:24:23.612159] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:31.908 [2024-07-26 23:24:23.612211] ublk.c: 
933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:31.908 [2024-07-26 23:24:23.612221] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:31.908 [2024-07-26 23:24:23.620033] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:31.908 [2024-07-26 23:24:23.620060] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:31.908 [2024-07-26 23:24:23.620173] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:31.908 1 00:16:31.908 23:24:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.908 23:24:23 -- ublk/ublk_recovery.sh@52 -- # wait 71057 00:16:31.908 [2024-07-26 23:24:23.628012] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:31.908 [2024-07-26 23:24:23.635602] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:31.908 [2024-07-26 23:24:23.643230] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:31.908 [2024-07-26 23:24:23.643261] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:28.144 00:17:28.144 fio_test: (groupid=0, jobs=1): err= 0: pid=71064: Fri Jul 26 23:25:12 2024 00:17:28.144 read: IOPS=20.9k, BW=81.5MiB/s (85.5MB/s)(4892MiB/60001msec) 00:17:28.144 slat (usec): min=2, max=4157, avg= 7.99, stdev= 4.45 00:17:28.144 clat (usec): min=880, max=6786.6k, avg=3053.38, stdev=51068.84 00:17:28.144 lat (usec): min=887, max=6786.6k, avg=3061.37, stdev=51068.86 00:17:28.144 clat percentiles (usec): 00:17:28.144 | 1.00th=[ 2057], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2442], 00:17:28.144 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:17:28.144 | 70.00th=[ 2606], 80.00th=[ 2638], 90.00th=[ 2933], 95.00th=[ 3851], 00:17:28.144 | 99.00th=[ 5080], 99.50th=[ 5669], 99.90th=[ 6783], 99.95th=[ 7439], 00:17:28.144 | 99.99th=[12649] 00:17:28.144 bw ( KiB/s): min=42371, max=98168, per=100.00%, avg=93701.42, stdev=7840.42, samples=106 00:17:28.144 iops : min=10592, max=24542, avg=23425.32, stdev=1960.15, samples=106 00:17:28.144 write: IOPS=20.9k, BW=81.5MiB/s (85.4MB/s)(4887MiB/60001msec); 0 zone resets 00:17:28.144 slat (usec): min=2, max=244, avg= 8.01, stdev= 2.43 00:17:28.144 clat (usec): min=888, max=6786.8k, avg=3064.73, stdev=45777.96 00:17:28.144 lat (usec): min=897, max=6786.9k, avg=3072.74, stdev=45777.98 00:17:28.144 clat percentiles (usec): 00:17:28.144 | 1.00th=[ 2089], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2540], 00:17:28.144 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:17:28.144 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2933], 95.00th=[ 3851], 00:17:28.144 | 99.00th=[ 5145], 99.50th=[ 5735], 99.90th=[ 6980], 99.95th=[ 7504], 00:17:28.144 | 99.99th=[ 9634] 00:17:28.144 bw ( KiB/s): min=43361, max=98528, per=100.00%, avg=93626.35, stdev=7659.60, samples=106 00:17:28.144 iops : min=10840, max=24632, avg=23406.57, stdev=1914.91, samples=106 00:17:28.144 lat (usec) : 1000=0.01% 00:17:28.144 lat (msec) : 2=0.64%, 4=95.00%, 10=4.35%, 20=0.01%, >=2000=0.01% 00:17:28.144 cpu : usr=12.72%, sys=32.37%, ctx=109109, majf=0, minf=13 00:17:28.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:28.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:28.144 issued rwts: total=1252267,1251187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:28.144 00:17:28.144 Run status group 0 (all jobs): 00:17:28.144 READ: bw=81.5MiB/s (85.5MB/s), 81.5MiB/s-81.5MiB/s (85.5MB/s-85.5MB/s), io=4892MiB (5129MB), run=60001-60001msec 00:17:28.144 WRITE: bw=81.5MiB/s (85.4MB/s), 81.5MiB/s-81.5MiB/s (85.4MB/s-85.4MB/s), io=4887MiB (5125MB), run=60001-60001msec 00:17:28.144 00:17:28.144 Disk stats (read/write): 00:17:28.144 ublkb1: ios=1249752/1248565, merge=0/0, ticks=3706173/3580318, in_queue=7286492, util=99.93% 00:17:28.144 23:25:12 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:28.144 23:25:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:28.144 23:25:12 -- common/autotest_common.sh@10 -- # set +x 00:17:28.144 [2024-07-26 23:25:12.138538] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:28.144 [2024-07-26 23:25:12.188054] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:28.144 [2024-07-26 23:25:12.188362] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:28.144 [2024-07-26 23:25:12.196074] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:28.144 [2024-07-26 23:25:12.196217] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:28.144 [2024-07-26 23:25:12.196231] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:28.144 23:25:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:28.144 23:25:12 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:28.144 23:25:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:28.144 23:25:12 -- common/autotest_common.sh@10 -- # set +x 00:17:28.144 [2024-07-26 23:25:12.212116] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:17:28.144 [2024-07-26 23:25:12.219003] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:17:28.144 [2024-07-26 23:25:12.219040] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:28.144 23:25:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:28.144 23:25:12 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:28.144 23:25:12 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:28.144 23:25:12 -- ublk/ublk_recovery.sh@14 -- # killprocess 71168 00:17:28.144 23:25:12 -- common/autotest_common.sh@926 -- # '[' -z 71168 ']' 00:17:28.144 23:25:12 -- common/autotest_common.sh@930 -- # kill -0 71168 00:17:28.144 23:25:12 -- common/autotest_common.sh@931 -- # uname 00:17:28.144 23:25:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:28.144 23:25:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71168 00:17:28.144 killing process with pid 71168 00:17:28.144 23:25:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:28.144 23:25:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:28.144 23:25:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71168' 00:17:28.144 23:25:12 -- common/autotest_common.sh@945 -- # kill 71168 00:17:28.144 23:25:12 -- common/autotest_common.sh@950 -- # wait 71168 00:17:28.144 [2024-07-26 23:25:13.428497] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:17:28.144 [2024-07-26 23:25:13.428572] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:17:28.144 ************************************ 00:17:28.144 END 
TEST ublk_recovery 00:17:28.144 ************************************ 00:17:28.144 00:17:28.144 real 1m5.978s 00:17:28.144 user 1m50.265s 00:17:28.144 sys 0m37.583s 00:17:28.144 23:25:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.144 23:25:14 -- common/autotest_common.sh@10 -- # set +x 00:17:28.144 23:25:15 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:17:28.144 23:25:15 -- spdk/autotest.sh@268 -- # timing_exit lib 00:17:28.144 23:25:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:28.144 23:25:15 -- common/autotest_common.sh@10 -- # set +x 00:17:28.144 23:25:15 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:17:28.144 23:25:15 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:17:28.144 23:25:15 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:17:28.144 23:25:15 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:28.144 23:25:15 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:28.144 23:25:15 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:28.144 23:25:15 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:28.145 23:25:15 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:28.145 23:25:15 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:28.145 23:25:15 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:28.145 23:25:15 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:28.145 23:25:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:28.145 23:25:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:28.145 23:25:15 -- common/autotest_common.sh@10 -- # set +x 00:17:28.145 ************************************ 00:17:28.145 START TEST ftl 00:17:28.145 ************************************ 00:17:28.145 23:25:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:28.145 * Looking for test storage... 00:17:28.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:28.145 23:25:15 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:28.145 23:25:15 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:28.145 23:25:15 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:28.145 23:25:15 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:28.145 23:25:15 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:28.145 23:25:15 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:28.145 23:25:15 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.145 23:25:15 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:28.145 23:25:15 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:28.145 23:25:15 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:28.145 23:25:15 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:28.145 23:25:15 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:28.145 23:25:15 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:28.145 23:25:15 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:28.145 23:25:15 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:28.145 23:25:15 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:28.145 23:25:15 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:28.145 23:25:15 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:28.145 23:25:15 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:28.145 23:25:15 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:28.145 23:25:15 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:28.145 23:25:15 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:28.145 23:25:15 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:28.145 23:25:15 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:28.145 23:25:15 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:28.145 23:25:15 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:28.145 23:25:15 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:28.145 23:25:15 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:28.145 23:25:15 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:28.145 23:25:15 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.145 23:25:15 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:28.145 23:25:15 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:28.145 23:25:15 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:28.145 23:25:15 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:28.145 23:25:15 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:28.145 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:28.145 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:28.145 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:28.145 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:28.145 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:28.145 23:25:16 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=71984 00:17:28.145 23:25:16 -- ftl/ftl.sh@38 -- # waitforlisten 71984 00:17:28.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
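Cache and base device selection in the steps that follow is driven entirely by bdev_get_bdevs output piped through jq; a standalone sketch of the cache-disk query, using the same filter that appears verbatim in the trace below:

    # NV-cache candidates: 64-byte metadata, non-zoned, at least 1310720 blocks
    scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
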
00:17:28.145 23:25:16 -- common/autotest_common.sh@819 -- # '[' -z 71984 ']' 00:17:28.145 23:25:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.145 23:25:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:28.145 23:25:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.145 23:25:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:28.145 23:25:16 -- common/autotest_common.sh@10 -- # set +x 00:17:28.145 23:25:16 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:28.145 [2024-07-26 23:25:16.246658] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:28.145 [2024-07-26 23:25:16.246789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71984 ] 00:17:28.145 [2024-07-26 23:25:16.420780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.145 [2024-07-26 23:25:16.674590] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:28.145 [2024-07-26 23:25:16.674813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.145 23:25:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:28.145 23:25:16 -- common/autotest_common.sh@852 -- # return 0 00:17:28.145 23:25:16 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:28.145 23:25:17 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:28.145 23:25:18 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:28.145 23:25:18 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:28.145 23:25:18 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:28.145 23:25:18 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:28.145 23:25:18 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:28.145 23:25:18 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:17:28.145 23:25:18 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:28.145 23:25:18 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:17:28.145 23:25:18 -- ftl/ftl.sh@50 -- # break 00:17:28.145 23:25:18 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:17:28.145 23:25:18 -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:28.145 23:25:18 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:28.145 23:25:18 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:28.145 23:25:19 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:17:28.145 23:25:19 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:28.145 23:25:19 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:17:28.145 23:25:19 -- ftl/ftl.sh@63 -- # break 00:17:28.145 23:25:19 -- ftl/ftl.sh@66 -- # killprocess 71984 00:17:28.145 23:25:19 -- common/autotest_common.sh@926 -- # '[' -z 71984 ']' 00:17:28.145 23:25:19 -- common/autotest_common.sh@930 -- # kill -0 71984 00:17:28.145 23:25:19 -- common/autotest_common.sh@931 -- # uname 00:17:28.145 
23:25:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:28.145 23:25:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71984 00:17:28.145 23:25:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:28.145 killing process with pid 71984 00:17:28.145 23:25:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:28.145 23:25:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71984' 00:17:28.145 23:25:19 -- common/autotest_common.sh@945 -- # kill 71984 00:17:28.145 23:25:19 -- common/autotest_common.sh@950 -- # wait 71984 00:17:30.050 23:25:21 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:17:30.050 23:25:21 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:17:30.050 23:25:21 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:17:30.050 23:25:21 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:30.050 23:25:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:30.050 23:25:21 -- common/autotest_common.sh@10 -- # set +x 00:17:30.050 ************************************ 00:17:30.050 START TEST ftl_fio_basic 00:17:30.050 ************************************ 00:17:30.050 23:25:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:17:30.050 * Looking for test storage... 00:17:30.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:30.050 23:25:21 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:30.050 23:25:21 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:30.050 23:25:21 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:30.050 23:25:21 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:30.050 23:25:21 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
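The fixture ftl_fio_basic assembles over the following steps is a thin-provisioned lvol on one NVMe namespace, fronted by an NV-cache slice split from a second controller; condensed here from the RPC calls logged below (PCI addresses and sizes are the ones this run happens to use, and the two UUID placeholders stand in for values printed later in the log):

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
    scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>   # thin lvol, 103424 MiB
    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1                   # one 5171 MiB NV-cache slice
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 60
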
00:17:30.050 23:25:21 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:30.050 23:25:21 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.050 23:25:21 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:30.050 23:25:21 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:30.050 23:25:21 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.050 23:25:21 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.050 23:25:21 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:30.050 23:25:21 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:30.050 23:25:21 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:30.050 23:25:21 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:30.050 23:25:21 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:30.050 23:25:21 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:30.050 23:25:21 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.050 23:25:21 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.050 23:25:21 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:30.050 23:25:21 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:30.050 23:25:21 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:30.050 23:25:21 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:30.050 23:25:21 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:30.050 23:25:21 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:30.050 23:25:21 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:30.050 23:25:21 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:30.050 23:25:21 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:30.050 23:25:21 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:30.050 23:25:21 -- ftl/fio.sh@11 -- # declare -A suite 00:17:30.050 23:25:21 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:30.050 23:25:21 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:30.050 23:25:21 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:30.050 23:25:21 -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.050 23:25:21 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:17:30.050 23:25:21 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:17:30.050 23:25:21 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:30.050 23:25:21 -- ftl/fio.sh@26 -- # uuid= 00:17:30.050 23:25:21 -- ftl/fio.sh@27 -- # timeout=240 00:17:30.050 23:25:21 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:30.050 23:25:21 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:30.050 23:25:21 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:30.050 23:25:21 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:30.050 23:25:21 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:30.050 23:25:21 -- ftl/fio.sh@40 
-- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:30.050 23:25:21 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:30.050 23:25:21 -- ftl/fio.sh@45 -- # svcpid=72118 00:17:30.050 23:25:21 -- ftl/fio.sh@46 -- # waitforlisten 72118 00:17:30.050 23:25:21 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:30.050 23:25:21 -- common/autotest_common.sh@819 -- # '[' -z 72118 ']' 00:17:30.310 23:25:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.310 23:25:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.310 23:25:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.310 23:25:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.310 23:25:21 -- common/autotest_common.sh@10 -- # set +x 00:17:30.310 [2024-07-26 23:25:21.910626] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:30.310 [2024-07-26 23:25:21.910766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72118 ] 00:17:30.569 [2024-07-26 23:25:22.087556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:30.829 [2024-07-26 23:25:22.342723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:30.829 [2024-07-26 23:25:22.343149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.829 [2024-07-26 23:25:22.343290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.829 [2024-07-26 23:25:22.343317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.733 23:25:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:32.733 23:25:24 -- common/autotest_common.sh@852 -- # return 0 00:17:32.733 23:25:24 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:32.733 23:25:24 -- ftl/common.sh@54 -- # local name=nvme0 00:17:32.733 23:25:24 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:32.733 23:25:24 -- ftl/common.sh@56 -- # local size=103424 00:17:32.733 23:25:24 -- ftl/common.sh@59 -- # local base_bdev 00:17:32.733 23:25:24 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:32.733 23:25:24 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:32.733 23:25:24 -- ftl/common.sh@62 -- # local base_size 00:17:32.733 23:25:24 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:32.733 23:25:24 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:17:32.733 23:25:24 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:32.733 23:25:24 -- common/autotest_common.sh@1359 -- # local bs 00:17:32.733 23:25:24 -- common/autotest_common.sh@1360 -- # local nb 00:17:32.733 23:25:24 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:32.733 23:25:24 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:32.733 { 00:17:32.733 "name": "nvme0n1", 00:17:32.733 "aliases": [ 00:17:32.733 "45870de2-d729-4010-b4ed-8eed43700446" 00:17:32.733 ], 00:17:32.733 "product_name": "NVMe disk", 00:17:32.733 
"block_size": 4096, 00:17:32.733 "num_blocks": 1310720, 00:17:32.733 "uuid": "45870de2-d729-4010-b4ed-8eed43700446", 00:17:32.733 "assigned_rate_limits": { 00:17:32.733 "rw_ios_per_sec": 0, 00:17:32.733 "rw_mbytes_per_sec": 0, 00:17:32.733 "r_mbytes_per_sec": 0, 00:17:32.733 "w_mbytes_per_sec": 0 00:17:32.733 }, 00:17:32.733 "claimed": false, 00:17:32.733 "zoned": false, 00:17:32.733 "supported_io_types": { 00:17:32.733 "read": true, 00:17:32.733 "write": true, 00:17:32.733 "unmap": true, 00:17:32.733 "write_zeroes": true, 00:17:32.733 "flush": true, 00:17:32.733 "reset": true, 00:17:32.733 "compare": true, 00:17:32.733 "compare_and_write": false, 00:17:32.733 "abort": true, 00:17:32.733 "nvme_admin": true, 00:17:32.733 "nvme_io": true 00:17:32.733 }, 00:17:32.733 "driver_specific": { 00:17:32.733 "nvme": [ 00:17:32.733 { 00:17:32.733 "pci_address": "0000:00:07.0", 00:17:32.733 "trid": { 00:17:32.733 "trtype": "PCIe", 00:17:32.733 "traddr": "0000:00:07.0" 00:17:32.733 }, 00:17:32.733 "ctrlr_data": { 00:17:32.733 "cntlid": 0, 00:17:32.733 "vendor_id": "0x1b36", 00:17:32.733 "model_number": "QEMU NVMe Ctrl", 00:17:32.733 "serial_number": "12341", 00:17:32.733 "firmware_revision": "8.0.0", 00:17:32.733 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:32.733 "oacs": { 00:17:32.733 "security": 0, 00:17:32.733 "format": 1, 00:17:32.733 "firmware": 0, 00:17:32.733 "ns_manage": 1 00:17:32.733 }, 00:17:32.733 "multi_ctrlr": false, 00:17:32.733 "ana_reporting": false 00:17:32.733 }, 00:17:32.733 "vs": { 00:17:32.733 "nvme_version": "1.4" 00:17:32.733 }, 00:17:32.733 "ns_data": { 00:17:32.733 "id": 1, 00:17:32.733 "can_share": false 00:17:32.733 } 00:17:32.733 } 00:17:32.733 ], 00:17:32.733 "mp_policy": "active_passive" 00:17:32.733 } 00:17:32.733 } 00:17:32.733 ]' 00:17:32.733 23:25:24 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:32.733 23:25:24 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:32.992 23:25:24 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:32.992 23:25:24 -- common/autotest_common.sh@1363 -- # nb=1310720 00:17:32.992 23:25:24 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:17:32.992 23:25:24 -- common/autotest_common.sh@1367 -- # echo 5120 00:17:32.992 23:25:24 -- ftl/common.sh@63 -- # base_size=5120 00:17:32.992 23:25:24 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:32.992 23:25:24 -- ftl/common.sh@67 -- # clear_lvols 00:17:32.992 23:25:24 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:32.992 23:25:24 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:32.992 23:25:24 -- ftl/common.sh@28 -- # stores= 00:17:32.992 23:25:24 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:33.252 23:25:24 -- ftl/common.sh@68 -- # lvs=812e61cb-693b-4e2d-aa62-72f170ece7ce 00:17:33.252 23:25:24 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 812e61cb-693b-4e2d-aa62-72f170ece7ce 00:17:33.511 23:25:25 -- ftl/fio.sh@48 -- # split_bdev=da1dfc1d-7978-4094-b1ac-4daa6f53821f 00:17:33.511 23:25:25 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 da1dfc1d-7978-4094-b1ac-4daa6f53821f 00:17:33.511 23:25:25 -- ftl/common.sh@35 -- # local name=nvc0 00:17:33.511 23:25:25 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:33.511 23:25:25 -- ftl/common.sh@37 -- # local base_bdev=da1dfc1d-7978-4094-b1ac-4daa6f53821f 00:17:33.511 23:25:25 -- ftl/common.sh@38 -- # local 
cache_size= 00:17:33.511 23:25:25 -- ftl/common.sh@41 -- # get_bdev_size da1dfc1d-7978-4094-b1ac-4daa6f53821f 00:17:33.511 23:25:25 -- common/autotest_common.sh@1357 -- # local bdev_name=da1dfc1d-7978-4094-b1ac-4daa6f53821f 00:17:33.511 23:25:25 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:33.511 23:25:25 -- common/autotest_common.sh@1359 -- # local bs 00:17:33.511 23:25:25 -- common/autotest_common.sh@1360 -- # local nb 00:17:33.511 23:25:25 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da1dfc1d-7978-4094-b1ac-4daa6f53821f 00:17:33.511 23:25:25 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:33.511 { 00:17:33.511 "name": "da1dfc1d-7978-4094-b1ac-4daa6f53821f", 00:17:33.511 "aliases": [ 00:17:33.511 "lvs/nvme0n1p0" 00:17:33.511 ], 00:17:33.511 "product_name": "Logical Volume", 00:17:33.511 "block_size": 4096, 00:17:33.511 "num_blocks": 26476544, 00:17:33.511 "uuid": "da1dfc1d-7978-4094-b1ac-4daa6f53821f", 00:17:33.511 "assigned_rate_limits": { 00:17:33.511 "rw_ios_per_sec": 0, 00:17:33.511 "rw_mbytes_per_sec": 0, 00:17:33.511 "r_mbytes_per_sec": 0, 00:17:33.511 "w_mbytes_per_sec": 0 00:17:33.511 }, 00:17:33.511 "claimed": false, 00:17:33.511 "zoned": false, 00:17:33.511 "supported_io_types": { 00:17:33.511 "read": true, 00:17:33.511 "write": true, 00:17:33.511 "unmap": true, 00:17:33.511 "write_zeroes": true, 00:17:33.511 "flush": false, 00:17:33.511 "reset": true, 00:17:33.511 "compare": false, 00:17:33.511 "compare_and_write": false, 00:17:33.511 "abort": false, 00:17:33.511 "nvme_admin": false, 00:17:33.511 "nvme_io": false 00:17:33.511 }, 00:17:33.511 "driver_specific": { 00:17:33.511 "lvol": { 00:17:33.511 "lvol_store_uuid": "812e61cb-693b-4e2d-aa62-72f170ece7ce", 00:17:33.511 "base_bdev": "nvme0n1", 00:17:33.511 "thin_provision": true, 00:17:33.511 "snapshot": false, 00:17:33.511 "clone": false, 00:17:33.511 "esnap_clone": false 00:17:33.511 } 00:17:33.511 } 00:17:33.511 } 00:17:33.511 ]' 00:17:33.511 23:25:25 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:33.770 23:25:25 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:33.770 23:25:25 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:33.770 23:25:25 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:33.770 23:25:25 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:33.770 23:25:25 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:33.770 23:25:25 -- ftl/common.sh@41 -- # local base_size=5171 00:17:33.770 23:25:25 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:33.770 23:25:25 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:34.029 23:25:25 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:34.029 23:25:25 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:34.029 23:25:25 -- ftl/common.sh@48 -- # get_bdev_size da1dfc1d-7978-4094-b1ac-4daa6f53821f 00:17:34.029 23:25:25 -- common/autotest_common.sh@1357 -- # local bdev_name=da1dfc1d-7978-4094-b1ac-4daa6f53821f 00:17:34.029 23:25:25 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:34.029 23:25:25 -- common/autotest_common.sh@1359 -- # local bs 00:17:34.029 23:25:25 -- common/autotest_common.sh@1360 -- # local nb 00:17:34.029 23:25:25 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da1dfc1d-7978-4094-b1ac-4daa6f53821f 00:17:34.029 23:25:25 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:34.029 { 
00:17:34.029 "name": "da1dfc1d-7978-4094-b1ac-4daa6f53821f", 00:17:34.029 "aliases": [ 00:17:34.029 "lvs/nvme0n1p0" 00:17:34.029 ], 00:17:34.029 "product_name": "Logical Volume", 00:17:34.029 "block_size": 4096, 00:17:34.029 "num_blocks": 26476544, 00:17:34.029 "uuid": "da1dfc1d-7978-4094-b1ac-4daa6f53821f", 00:17:34.029 "assigned_rate_limits": { 00:17:34.029 "rw_ios_per_sec": 0, 00:17:34.029 "rw_mbytes_per_sec": 0, 00:17:34.029 "r_mbytes_per_sec": 0, 00:17:34.029 "w_mbytes_per_sec": 0 00:17:34.030 }, 00:17:34.030 "claimed": false, 00:17:34.030 "zoned": false, 00:17:34.030 "supported_io_types": { 00:17:34.030 "read": true, 00:17:34.030 "write": true, 00:17:34.030 "unmap": true, 00:17:34.030 "write_zeroes": true, 00:17:34.030 "flush": false, 00:17:34.030 "reset": true, 00:17:34.030 "compare": false, 00:17:34.030 "compare_and_write": false, 00:17:34.030 "abort": false, 00:17:34.030 "nvme_admin": false, 00:17:34.030 "nvme_io": false 00:17:34.030 }, 00:17:34.030 "driver_specific": { 00:17:34.030 "lvol": { 00:17:34.030 "lvol_store_uuid": "812e61cb-693b-4e2d-aa62-72f170ece7ce", 00:17:34.030 "base_bdev": "nvme0n1", 00:17:34.030 "thin_provision": true, 00:17:34.030 "snapshot": false, 00:17:34.030 "clone": false, 00:17:34.030 "esnap_clone": false 00:17:34.030 } 00:17:34.030 } 00:17:34.030 } 00:17:34.030 ]' 00:17:34.030 23:25:25 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:34.030 23:25:25 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:34.030 23:25:25 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:34.289 23:25:25 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:34.289 23:25:25 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:34.289 23:25:25 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:34.289 23:25:25 -- ftl/common.sh@48 -- # cache_size=5171 00:17:34.289 23:25:25 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:34.289 23:25:25 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:34.289 23:25:25 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:34.289 23:25:25 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:34.289 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:34.289 23:25:25 -- ftl/fio.sh@56 -- # get_bdev_size da1dfc1d-7978-4094-b1ac-4daa6f53821f 00:17:34.289 23:25:25 -- common/autotest_common.sh@1357 -- # local bdev_name=da1dfc1d-7978-4094-b1ac-4daa6f53821f 00:17:34.289 23:25:25 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:34.289 23:25:25 -- common/autotest_common.sh@1359 -- # local bs 00:17:34.289 23:25:25 -- common/autotest_common.sh@1360 -- # local nb 00:17:34.289 23:25:25 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da1dfc1d-7978-4094-b1ac-4daa6f53821f 00:17:34.548 23:25:26 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:34.548 { 00:17:34.548 "name": "da1dfc1d-7978-4094-b1ac-4daa6f53821f", 00:17:34.548 "aliases": [ 00:17:34.548 "lvs/nvme0n1p0" 00:17:34.548 ], 00:17:34.548 "product_name": "Logical Volume", 00:17:34.548 "block_size": 4096, 00:17:34.548 "num_blocks": 26476544, 00:17:34.548 "uuid": "da1dfc1d-7978-4094-b1ac-4daa6f53821f", 00:17:34.548 "assigned_rate_limits": { 00:17:34.548 "rw_ios_per_sec": 0, 00:17:34.548 "rw_mbytes_per_sec": 0, 00:17:34.548 "r_mbytes_per_sec": 0, 00:17:34.548 "w_mbytes_per_sec": 0 00:17:34.548 }, 00:17:34.548 "claimed": false, 00:17:34.548 "zoned": false, 00:17:34.548 "supported_io_types": { 00:17:34.548 "read": true, 
00:17:34.548 "write": true, 00:17:34.548 "unmap": true, 00:17:34.548 "write_zeroes": true, 00:17:34.548 "flush": false, 00:17:34.548 "reset": true, 00:17:34.548 "compare": false, 00:17:34.548 "compare_and_write": false, 00:17:34.548 "abort": false, 00:17:34.548 "nvme_admin": false, 00:17:34.548 "nvme_io": false 00:17:34.548 }, 00:17:34.548 "driver_specific": { 00:17:34.548 "lvol": { 00:17:34.548 "lvol_store_uuid": "812e61cb-693b-4e2d-aa62-72f170ece7ce", 00:17:34.548 "base_bdev": "nvme0n1", 00:17:34.548 "thin_provision": true, 00:17:34.548 "snapshot": false, 00:17:34.548 "clone": false, 00:17:34.548 "esnap_clone": false 00:17:34.548 } 00:17:34.548 } 00:17:34.548 } 00:17:34.548 ]' 00:17:34.548 23:25:26 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:34.548 23:25:26 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:34.548 23:25:26 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:34.548 23:25:26 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:34.548 23:25:26 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:34.548 23:25:26 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:34.548 23:25:26 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:34.548 23:25:26 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:34.548 23:25:26 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d da1dfc1d-7978-4094-b1ac-4daa6f53821f -c nvc0n1p0 --l2p_dram_limit 60 00:17:34.809 [2024-07-26 23:25:26.383595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.809 [2024-07-26 23:25:26.383652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:34.809 [2024-07-26 23:25:26.383674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:34.809 [2024-07-26 23:25:26.383685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.809 [2024-07-26 23:25:26.383807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.809 [2024-07-26 23:25:26.383822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:34.809 [2024-07-26 23:25:26.383837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:34.809 [2024-07-26 23:25:26.383847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.809 [2024-07-26 23:25:26.383906] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:34.809 [2024-07-26 23:25:26.385258] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:34.809 [2024-07-26 23:25:26.385298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.809 [2024-07-26 23:25:26.385310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.809 [2024-07-26 23:25:26.385325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:17:34.809 [2024-07-26 23:25:26.385335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.809 [2024-07-26 23:25:26.385440] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1daeb963-3904-413f-a3ac-2fd1da9258e0 00:17:34.809 [2024-07-26 23:25:26.387832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.809 [2024-07-26 23:25:26.387870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:34.809 [2024-07-26 23:25:26.387883] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:34.809 [2024-07-26 23:25:26.387907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.809 [2024-07-26 23:25:26.401475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.809 [2024-07-26 23:25:26.401511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.809 [2024-07-26 23:25:26.401526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.440 ms 00:17:34.809 [2024-07-26 23:25:26.401559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.809 [2024-07-26 23:25:26.401674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.809 [2024-07-26 23:25:26.401691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.809 [2024-07-26 23:25:26.401703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:34.809 [2024-07-26 23:25:26.401719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.809 [2024-07-26 23:25:26.401809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.809 [2024-07-26 23:25:26.401824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:34.809 [2024-07-26 23:25:26.401836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:34.809 [2024-07-26 23:25:26.401848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.809 [2024-07-26 23:25:26.401902] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:34.809 [2024-07-26 23:25:26.408597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.809 [2024-07-26 23:25:26.408630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.809 [2024-07-26 23:25:26.408646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.717 ms 00:17:34.809 [2024-07-26 23:25:26.408660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.809 [2024-07-26 23:25:26.408721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.809 [2024-07-26 23:25:26.408733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:34.809 [2024-07-26 23:25:26.408746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:34.809 [2024-07-26 23:25:26.408756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.809 [2024-07-26 23:25:26.408814] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:34.809 [2024-07-26 23:25:26.408931] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:34.809 [2024-07-26 23:25:26.408952] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:34.809 [2024-07-26 23:25:26.408981] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:34.809 [2024-07-26 23:25:26.408998] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:34.809 [2024-07-26 23:25:26.409010] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:34.809 [2024-07-26 23:25:26.409024] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
entries: 20971520 00:17:34.809 [2024-07-26 23:25:26.409034] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:34.809 [2024-07-26 23:25:26.409048] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:34.809 [2024-07-26 23:25:26.409064] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:34.809 [2024-07-26 23:25:26.409078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.809 [2024-07-26 23:25:26.409091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:34.809 [2024-07-26 23:25:26.409107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:17:34.809 [2024-07-26 23:25:26.409117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.809 [2024-07-26 23:25:26.409195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.809 [2024-07-26 23:25:26.409207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:34.809 [2024-07-26 23:25:26.409221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:34.809 [2024-07-26 23:25:26.409231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.809 [2024-07-26 23:25:26.409347] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:34.809 [2024-07-26 23:25:26.409360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:34.809 [2024-07-26 23:25:26.409376] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.809 [2024-07-26 23:25:26.409387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.809 [2024-07-26 23:25:26.409401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:34.809 [2024-07-26 23:25:26.409411] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:34.809 [2024-07-26 23:25:26.409422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:34.809 [2024-07-26 23:25:26.409431] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:34.809 [2024-07-26 23:25:26.409443] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:34.809 [2024-07-26 23:25:26.409468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.809 [2024-07-26 23:25:26.409481] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:34.809 [2024-07-26 23:25:26.409494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:34.809 [2024-07-26 23:25:26.409508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.809 [2024-07-26 23:25:26.409518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:34.809 [2024-07-26 23:25:26.409530] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:34.809 [2024-07-26 23:25:26.409540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.809 [2024-07-26 23:25:26.409555] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:34.809 [2024-07-26 23:25:26.409565] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:34.809 [2024-07-26 23:25:26.409577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.809 [2024-07-26 23:25:26.409587] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:34.809 [2024-07-26 
23:25:26.409599] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:34.809 [2024-07-26 23:25:26.409609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:34.809 [2024-07-26 23:25:26.409621] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:34.809 [2024-07-26 23:25:26.409631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:34.809 [2024-07-26 23:25:26.409643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:34.809 [2024-07-26 23:25:26.409652] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:34.809 [2024-07-26 23:25:26.409664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:34.809 [2024-07-26 23:25:26.409674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:34.810 [2024-07-26 23:25:26.409686] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:34.810 [2024-07-26 23:25:26.409696] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:34.810 [2024-07-26 23:25:26.409707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:34.810 [2024-07-26 23:25:26.409716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:34.810 [2024-07-26 23:25:26.409732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:34.810 [2024-07-26 23:25:26.409741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:34.810 [2024-07-26 23:25:26.409753] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:34.810 [2024-07-26 23:25:26.409762] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:34.810 [2024-07-26 23:25:26.409774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.810 [2024-07-26 23:25:26.409783] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:34.810 [2024-07-26 23:25:26.409818] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:34.810 [2024-07-26 23:25:26.409827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.810 [2024-07-26 23:25:26.409839] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:34.810 [2024-07-26 23:25:26.409849] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:34.810 [2024-07-26 23:25:26.409862] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.810 [2024-07-26 23:25:26.409874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.810 [2024-07-26 23:25:26.409888] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:34.810 [2024-07-26 23:25:26.409898] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:34.810 [2024-07-26 23:25:26.409910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:34.810 [2024-07-26 23:25:26.409920] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:34.810 [2024-07-26 23:25:26.409935] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:34.810 [2024-07-26 23:25:26.409945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:34.810 [2024-07-26 23:25:26.409959] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:34.810 [2024-07-26 23:25:26.409983] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.810 [2024-07-26 23:25:26.409999] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:34.810 [2024-07-26 23:25:26.410010] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:34.810 [2024-07-26 23:25:26.410024] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:34.810 [2024-07-26 23:25:26.410035] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:34.810 [2024-07-26 23:25:26.410049] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:34.810 [2024-07-26 23:25:26.410059] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:34.810 [2024-07-26 23:25:26.410073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:34.810 [2024-07-26 23:25:26.410084] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:34.810 [2024-07-26 23:25:26.410098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:34.810 [2024-07-26 23:25:26.410108] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:34.810 [2024-07-26 23:25:26.410131] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:34.810 [2024-07-26 23:25:26.410141] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:34.810 [2024-07-26 23:25:26.410158] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:34.810 [2024-07-26 23:25:26.410168] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:34.810 [2024-07-26 23:25:26.410183] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.810 [2024-07-26 23:25:26.410197] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:34.810 [2024-07-26 23:25:26.410211] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:34.810 [2024-07-26 23:25:26.410222] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:34.810 [2024-07-26 23:25:26.410235] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:34.810 [2024-07-26 23:25:26.410246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.810 [2024-07-26 
23:25:26.410259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:34.810 [2024-07-26 23:25:26.410273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:17:34.810 [2024-07-26 23:25:26.410286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.810 [2024-07-26 23:25:26.439516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.810 [2024-07-26 23:25:26.439555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:34.810 [2024-07-26 23:25:26.439569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.186 ms 00:17:34.810 [2024-07-26 23:25:26.439596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.810 [2024-07-26 23:25:26.439701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.810 [2024-07-26 23:25:26.439716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:34.810 [2024-07-26 23:25:26.439727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:34.810 [2024-07-26 23:25:26.439739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.810 [2024-07-26 23:25:26.500195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.810 [2024-07-26 23:25:26.500235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:34.810 [2024-07-26 23:25:26.500252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.477 ms 00:17:34.810 [2024-07-26 23:25:26.500265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.810 [2024-07-26 23:25:26.500315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.810 [2024-07-26 23:25:26.500329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:34.810 [2024-07-26 23:25:26.500339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:34.810 [2024-07-26 23:25:26.500352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.810 [2024-07-26 23:25:26.501164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.810 [2024-07-26 23:25:26.501183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:34.810 [2024-07-26 23:25:26.501196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:17:34.810 [2024-07-26 23:25:26.501215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.810 [2024-07-26 23:25:26.501358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.810 [2024-07-26 23:25:26.501379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:34.810 [2024-07-26 23:25:26.501390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:34.810 [2024-07-26 23:25:26.501402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.810 [2024-07-26 23:25:26.538010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.810 [2024-07-26 23:25:26.538048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:34.810 [2024-07-26 23:25:26.538062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.607 ms 00:17:34.810 [2024-07-26 23:25:26.538077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.810 [2024-07-26 23:25:26.552836] ftl_l2p_cache.c: 
458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:35.070 [2024-07-26 23:25:26.578682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.070 [2024-07-26 23:25:26.578726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:35.070 [2024-07-26 23:25:26.578745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.516 ms 00:17:35.070 [2024-07-26 23:25:26.578759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.070 [2024-07-26 23:25:26.652208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.070 [2024-07-26 23:25:26.652268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:35.070 [2024-07-26 23:25:26.652287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.500 ms 00:17:35.070 [2024-07-26 23:25:26.652303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.070 [2024-07-26 23:25:26.652356] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:35.070 [2024-07-26 23:25:26.652370] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:40.343 [2024-07-26 23:25:31.438748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.343 [2024-07-26 23:25:31.438822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:40.343 [2024-07-26 23:25:31.438844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4794.160 ms 00:17:40.343 [2024-07-26 23:25:31.438855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.343 [2024-07-26 23:25:31.439175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.343 [2024-07-26 23:25:31.439193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:40.343 [2024-07-26 23:25:31.439208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:17:40.343 [2024-07-26 23:25:31.439218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.343 [2024-07-26 23:25:31.475847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.343 [2024-07-26 23:25:31.475883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:40.343 [2024-07-26 23:25:31.475906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.600 ms 00:17:40.343 [2024-07-26 23:25:31.475917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.343 [2024-07-26 23:25:31.511732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.343 [2024-07-26 23:25:31.511765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:40.343 [2024-07-26 23:25:31.511787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.784 ms 00:17:40.343 [2024-07-26 23:25:31.511797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.343 [2024-07-26 23:25:31.512313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.343 [2024-07-26 23:25:31.512328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:40.343 [2024-07-26 23:25:31.512343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:17:40.343 [2024-07-26 23:25:31.512357] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:40.343 [2024-07-26 23:25:31.605499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.343 [2024-07-26 23:25:31.605537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:40.343 [2024-07-26 23:25:31.605554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.215 ms 00:17:40.343 [2024-07-26 23:25:31.605581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.343 [2024-07-26 23:25:31.643951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.343 [2024-07-26 23:25:31.643997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:40.343 [2024-07-26 23:25:31.644016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.374 ms 00:17:40.343 [2024-07-26 23:25:31.644027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.343 [2024-07-26 23:25:31.649184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.343 [2024-07-26 23:25:31.649211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:40.343 [2024-07-26 23:25:31.649229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.079 ms 00:17:40.343 [2024-07-26 23:25:31.649255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.343 [2024-07-26 23:25:31.686001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.343 [2024-07-26 23:25:31.686035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:40.343 [2024-07-26 23:25:31.686051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.731 ms 00:17:40.343 [2024-07-26 23:25:31.686076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.343 [2024-07-26 23:25:31.686154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.343 [2024-07-26 23:25:31.686167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:40.343 [2024-07-26 23:25:31.686181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:40.343 [2024-07-26 23:25:31.686191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.343 [2024-07-26 23:25:31.686337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.343 [2024-07-26 23:25:31.686350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:40.343 [2024-07-26 23:25:31.686364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:40.343 [2024-07-26 23:25:31.686373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.343 [2024-07-26 23:25:31.687837] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5312.346 ms, result 0 00:17:40.343 { 00:17:40.343 "name": "ftl0", 00:17:40.343 "uuid": "1daeb963-3904-413f-a3ac-2fd1da9258e0" 00:17:40.343 } 00:17:40.343 23:25:31 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:40.343 23:25:31 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:17:40.343 23:25:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:40.343 23:25:31 -- common/autotest_common.sh@889 -- # local i 00:17:40.343 23:25:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:40.343 23:25:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:40.343 23:25:31 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:40.343 23:25:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:40.343 [ 00:17:40.343 { 00:17:40.343 "name": "ftl0", 00:17:40.343 "aliases": [ 00:17:40.343 "1daeb963-3904-413f-a3ac-2fd1da9258e0" 00:17:40.343 ], 00:17:40.343 "product_name": "FTL disk", 00:17:40.343 "block_size": 4096, 00:17:40.343 "num_blocks": 20971520, 00:17:40.343 "uuid": "1daeb963-3904-413f-a3ac-2fd1da9258e0", 00:17:40.343 "assigned_rate_limits": { 00:17:40.343 "rw_ios_per_sec": 0, 00:17:40.343 "rw_mbytes_per_sec": 0, 00:17:40.343 "r_mbytes_per_sec": 0, 00:17:40.343 "w_mbytes_per_sec": 0 00:17:40.343 }, 00:17:40.343 "claimed": false, 00:17:40.343 "zoned": false, 00:17:40.343 "supported_io_types": { 00:17:40.343 "read": true, 00:17:40.343 "write": true, 00:17:40.343 "unmap": true, 00:17:40.343 "write_zeroes": true, 00:17:40.343 "flush": true, 00:17:40.343 "reset": false, 00:17:40.343 "compare": false, 00:17:40.343 "compare_and_write": false, 00:17:40.343 "abort": false, 00:17:40.343 "nvme_admin": false, 00:17:40.343 "nvme_io": false 00:17:40.343 }, 00:17:40.343 "driver_specific": { 00:17:40.343 "ftl": { 00:17:40.343 "base_bdev": "da1dfc1d-7978-4094-b1ac-4daa6f53821f", 00:17:40.343 "cache": "nvc0n1p0" 00:17:40.343 } 00:17:40.343 } 00:17:40.343 } 00:17:40.343 ] 00:17:40.343 23:25:32 -- common/autotest_common.sh@895 -- # return 0 00:17:40.343 23:25:32 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:40.343 23:25:32 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:40.602 23:25:32 -- ftl/fio.sh@70 -- # echo ']}' 00:17:40.602 23:25:32 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:40.862 [2024-07-26 23:25:32.431155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.862 [2024-07-26 23:25:32.431211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:40.862 [2024-07-26 23:25:32.431227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.862 [2024-07-26 23:25:32.431240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.862 [2024-07-26 23:25:32.431285] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:40.862 [2024-07-26 23:25:32.435345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.862 [2024-07-26 23:25:32.435375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:40.862 [2024-07-26 23:25:32.435391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.043 ms 00:17:40.862 [2024-07-26 23:25:32.435417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.862 [2024-07-26 23:25:32.436084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.862 [2024-07-26 23:25:32.436103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:40.862 [2024-07-26 23:25:32.436117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:17:40.862 [2024-07-26 23:25:32.436127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.862 [2024-07-26 23:25:32.438658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.862 [2024-07-26 23:25:32.438679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:40.862 
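For reference, the RPC sequence captured above — bdev_wait_for_examine, bdev_get_bdevs with a 2000 ms timeout, save_subsystem_config, then bdev_ftl_unload — can be replayed by hand against a running SPDK target. A minimal sketch using only the rpc.py calls visible in this log; the jq filter is an illustrative assumption, not part of the test:

  # wait for bdev examine to finish, then inspect the FTL bdev (2000 ms timeout, as in waitforbdev)
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 | jq '.[0] | {block_size, num_blocks, uuid}'   # jq filter is hypothetical
  # snapshot the bdev subsystem config, then tear the FTL instance down
  scripts/rpc.py save_subsystem_config -n bdev
  scripts/rpc.py bdev_ftl_unload -b ftl0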
[2024-07-26 23:25:32.438693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.496 ms 00:17:40.862 [2024-07-26 23:25:32.438703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.862 [2024-07-26 23:25:32.443740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.862 [2024-07-26 23:25:32.443775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:40.862 [2024-07-26 23:25:32.443794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.980 ms 00:17:40.862 [2024-07-26 23:25:32.443804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.862 [2024-07-26 23:25:32.481699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.862 [2024-07-26 23:25:32.481735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:40.862 [2024-07-26 23:25:32.481752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.838 ms 00:17:40.862 [2024-07-26 23:25:32.481762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.862 [2024-07-26 23:25:32.504690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.862 [2024-07-26 23:25:32.504725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:40.862 [2024-07-26 23:25:32.504743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.906 ms 00:17:40.862 [2024-07-26 23:25:32.504753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.862 [2024-07-26 23:25:32.505028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.862 [2024-07-26 23:25:32.505043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:40.862 [2024-07-26 23:25:32.505057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:17:40.862 [2024-07-26 23:25:32.505085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.862 [2024-07-26 23:25:32.541094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.862 [2024-07-26 23:25:32.541127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:40.862 [2024-07-26 23:25:32.541144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.999 ms 00:17:40.862 [2024-07-26 23:25:32.541153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.862 [2024-07-26 23:25:32.577272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.862 [2024-07-26 23:25:32.577303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:40.862 [2024-07-26 23:25:32.577318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.117 ms 00:17:40.862 [2024-07-26 23:25:32.577327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.862 [2024-07-26 23:25:32.612596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.862 [2024-07-26 23:25:32.612640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:40.862 [2024-07-26 23:25:32.612658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.269 ms 00:17:40.862 [2024-07-26 23:25:32.612668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.123 [2024-07-26 23:25:32.648476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.123 [2024-07-26 23:25:32.648509] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:41.123 [2024-07-26 23:25:32.648525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.723 ms 00:17:41.123 [2024-07-26 23:25:32.648535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.123 [2024-07-26 23:25:32.648595] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:41.123 [2024-07-26 23:25:32.648613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:41.123 [2024-07-26 23:25:32.648816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.648831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.648842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.648856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.648867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.648881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 
[2024-07-26 23:25:32.648891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.648905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.648915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.648929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.648941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.648954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.648979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.648993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:41.124 [2024-07-26 23:25:32.649229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:41.124 [2024-07-26 23:25:32.649834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:41.125 [2024-07-26 23:25:32.649846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:41.125 [2024-07-26 23:25:32.649860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:41.125 [2024-07-26 23:25:32.649870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:41.125 [2024-07-26 23:25:32.649884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:41.125 [2024-07-26 23:25:32.649901] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:41.125 [2024-07-26 23:25:32.649915] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1daeb963-3904-413f-a3ac-2fd1da9258e0 00:17:41.125 [2024-07-26 23:25:32.649925] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:41.125 [2024-07-26 23:25:32.649938] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:41.125 [2024-07-26 23:25:32.649947] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:41.125 [2024-07-26 23:25:32.649960] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:41.125 [2024-07-26 23:25:32.649981] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:41.125 [2024-07-26 23:25:32.649995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:41.125 [2024-07-26 23:25:32.650005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:41.125 [2024-07-26 23:25:32.650016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:41.125 [2024-07-26 23:25:32.650025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:41.125 [2024-07-26 23:25:32.650041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.125 [2024-07-26 23:25:32.650052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:41.125 [2024-07-26 23:25:32.650065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.451 ms 00:17:41.125 [2024-07-26 23:25:32.650079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.125 [2024-07-26 23:25:32.670288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.125 [2024-07-26 23:25:32.670319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:41.125 [2024-07-26 23:25:32.670334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.160 ms 00:17:41.125 [2024-07-26 23:25:32.670344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.125 [2024-07-26 23:25:32.670645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.125 [2024-07-26 23:25:32.670655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:41.125 [2024-07-26 23:25:32.670672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:41.125 [2024-07-26 23:25:32.670682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.125 [2024-07-26 23:25:32.739501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.125 [2024-07-26 23:25:32.739535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.125 [2024-07-26 23:25:32.739550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.125 [2024-07-26 23:25:32.739578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
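The Action/name/duration/status quadruplets emitted by trace_step above make it easy to see where FTL startup and shutdown time goes — for example, 'Scrub NV cache' accounts for ~4794 ms of the 5312 ms 'FTL startup' total. A minimal sketch for ranking the steps by duration, assuming one record per line as printed on the console (the file name ftl0.log is hypothetical):

  awk '/407:trace_step/ { sub(/.*name: /, "");     name = $0 }
       /409:trace_step/ { sub(/.*duration: /, ""); printf "%10.3f ms  %s\n", $1, name }' ftl0.log |
    sort -rn | head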
00:17:41.125 [2024-07-26 23:25:32.739665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.125 [2024-07-26 23:25:32.739676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.125 [2024-07-26 23:25:32.739693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.125 [2024-07-26 23:25:32.739703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.125 [2024-07-26 23:25:32.739816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.125 [2024-07-26 23:25:32.739829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.125 [2024-07-26 23:25:32.739843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.125 [2024-07-26 23:25:32.739854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.125 [2024-07-26 23:25:32.739889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.125 [2024-07-26 23:25:32.739907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.125 [2024-07-26 23:25:32.739921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.125 [2024-07-26 23:25:32.739934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.125 [2024-07-26 23:25:32.875089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.125 [2024-07-26 23:25:32.875148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.125 [2024-07-26 23:25:32.875168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.125 [2024-07-26 23:25:32.875179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.385 [2024-07-26 23:25:32.919801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.385 [2024-07-26 23:25:32.919841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.385 [2024-07-26 23:25:32.919862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.385 [2024-07-26 23:25:32.919872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.385 [2024-07-26 23:25:32.920024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.385 [2024-07-26 23:25:32.920038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.385 [2024-07-26 23:25:32.920052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.385 [2024-07-26 23:25:32.920063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.385 [2024-07-26 23:25:32.920176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.385 [2024-07-26 23:25:32.920187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.385 [2024-07-26 23:25:32.920201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.385 [2024-07-26 23:25:32.920211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.385 [2024-07-26 23:25:32.920362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.385 [2024-07-26 23:25:32.920376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.385 [2024-07-26 23:25:32.920390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.385 [2024-07-26 
23:25:32.920400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.385 [2024-07-26 23:25:32.920481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.385 [2024-07-26 23:25:32.920494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:41.385 [2024-07-26 23:25:32.920508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.385 [2024-07-26 23:25:32.920518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.385 [2024-07-26 23:25:32.920591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.385 [2024-07-26 23:25:32.920607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.385 [2024-07-26 23:25:32.920622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.385 [2024-07-26 23:25:32.920633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.385 [2024-07-26 23:25:32.920706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.385 [2024-07-26 23:25:32.920717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.385 [2024-07-26 23:25:32.920732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.385 [2024-07-26 23:25:32.920742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.385 [2024-07-26 23:25:32.920992] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 490.562 ms, result 0 00:17:41.385 true 00:17:41.385 23:25:32 -- ftl/fio.sh@75 -- # killprocess 72118 00:17:41.385 23:25:32 -- common/autotest_common.sh@926 -- # '[' -z 72118 ']' 00:17:41.385 23:25:32 -- common/autotest_common.sh@930 -- # kill -0 72118 00:17:41.385 23:25:32 -- common/autotest_common.sh@931 -- # uname 00:17:41.385 23:25:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:41.385 23:25:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72118 00:17:41.385 killing process with pid 72118 00:17:41.385 23:25:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:41.385 23:25:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:41.385 23:25:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72118' 00:17:41.385 23:25:32 -- common/autotest_common.sh@945 -- # kill 72118 00:17:41.385 23:25:32 -- common/autotest_common.sh@950 -- # wait 72118 00:17:46.651 23:25:37 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:46.651 23:25:37 -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:46.651 23:25:37 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:46.651 23:25:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:46.651 23:25:37 -- common/autotest_common.sh@10 -- # set +x 00:17:46.651 23:25:37 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:46.651 23:25:37 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:46.651 23:25:37 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:17:46.651 23:25:37 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:46.651 23:25:37 -- common/autotest_common.sh@1318 -- # local sanitizers 00:17:46.651 23:25:37 -- common/autotest_common.sh@1319 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:46.651 23:25:37 -- common/autotest_common.sh@1320 -- # shift 00:17:46.651 23:25:37 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:17:46.651 23:25:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:17:46.651 23:25:37 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:46.651 23:25:37 -- common/autotest_common.sh@1324 -- # grep libasan 00:17:46.651 23:25:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:17:46.651 23:25:37 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:46.651 23:25:37 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:46.651 23:25:37 -- common/autotest_common.sh@1326 -- # break 00:17:46.651 23:25:37 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:46.651 23:25:37 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:46.651 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:46.651 fio-3.35 00:17:46.651 Starting 1 thread 00:17:53.217 00:17:53.217 test: (groupid=0, jobs=1): err= 0: pid=72370: Fri Jul 26 23:25:44 2024 00:17:53.217 read: IOPS=864, BW=57.4MiB/s (60.2MB/s)(255MiB/4436msec) 00:17:53.217 slat (nsec): min=7089, max=28489, avg=10145.67, stdev=2390.82 00:17:53.217 clat (usec): min=346, max=861, avg=519.35, stdev=54.80 00:17:53.217 lat (usec): min=355, max=870, avg=529.50, stdev=55.34 00:17:53.217 clat percentiles (usec): 00:17:53.217 | 1.00th=[ 392], 5.00th=[ 449], 10.00th=[ 461], 20.00th=[ 474], 00:17:53.217 | 30.00th=[ 482], 40.00th=[ 490], 50.00th=[ 523], 60.00th=[ 545], 00:17:53.217 | 70.00th=[ 553], 80.00th=[ 562], 90.00th=[ 578], 95.00th=[ 594], 00:17:53.217 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 742], 99.95th=[ 832], 00:17:53.217 | 99.99th=[ 865] 00:17:53.217 write: IOPS=870, BW=57.8MiB/s (60.6MB/s)(256MiB/4432msec); 0 zone resets 00:17:53.217 slat (nsec): min=17693, max=60769, avg=21383.21, stdev=3798.08 00:17:53.217 clat (usec): min=411, max=993, avg=592.35, stdev=60.44 00:17:53.217 lat (usec): min=429, max=1020, avg=613.73, stdev=60.55 00:17:53.217 clat percentiles (usec): 00:17:53.217 | 1.00th=[ 474], 5.00th=[ 494], 10.00th=[ 519], 20.00th=[ 562], 00:17:53.217 | 30.00th=[ 570], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 594], 00:17:53.217 | 70.00th=[ 603], 80.00th=[ 635], 90.00th=[ 660], 95.00th=[ 676], 00:17:53.217 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 955], 99.95th=[ 971], 00:17:53.217 | 99.99th=[ 996] 00:17:53.217 bw ( KiB/s): min=57256, max=60792, per=100.00%, avg=59332.25, stdev=1098.46, samples=8 00:17:53.217 iops : min= 842, max= 894, avg=872.50, stdev=16.17, samples=8 00:17:53.217 lat (usec) : 500=25.26%, 750=73.87%, 1000=0.87% 00:17:53.217 cpu : usr=99.39%, sys=0.05%, ctx=12, majf=0, minf=1318 00:17:53.217 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:53.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:53.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:53.217 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:53.217 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:53.217 00:17:53.217 Run status group 0 (all jobs): 00:17:53.217 READ: bw=57.4MiB/s (60.2MB/s), 
57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=255MiB (267MB), run=4436-4436msec 00:17:53.217 WRITE: bw=57.8MiB/s (60.6MB/s), 57.8MiB/s-57.8MiB/s (60.6MB/s-60.6MB/s), io=256MiB (269MB), run=4432-4432msec 00:17:54.152 ----------------------------------------------------- 00:17:54.152 Suppressions used: 00:17:54.152 count bytes template 00:17:54.152 1 5 /usr/src/fio/parse.c 00:17:54.152 1 8 libtcmalloc_minimal.so 00:17:54.152 1 904 libcrypto.so 00:17:54.152 ----------------------------------------------------- 00:17:54.152 00:17:54.152 23:25:45 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:54.152 23:25:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:54.152 23:25:45 -- common/autotest_common.sh@10 -- # set +x 00:17:54.152 23:25:45 -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:54.152 23:25:45 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:54.152 23:25:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:54.152 23:25:45 -- common/autotest_common.sh@10 -- # set +x 00:17:54.152 23:25:45 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:54.152 23:25:45 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:54.152 23:25:45 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:17:54.152 23:25:45 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:54.152 23:25:45 -- common/autotest_common.sh@1318 -- # local sanitizers 00:17:54.152 23:25:45 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:54.152 23:25:45 -- common/autotest_common.sh@1320 -- # shift 00:17:54.152 23:25:45 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:17:54.152 23:25:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:17:54.152 23:25:45 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:54.152 23:25:45 -- common/autotest_common.sh@1324 -- # grep libasan 00:17:54.152 23:25:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:17:54.152 23:25:45 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:54.152 23:25:45 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:54.152 23:25:45 -- common/autotest_common.sh@1326 -- # break 00:17:54.152 23:25:45 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:54.152 23:25:45 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:54.412 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:54.412 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:54.412 fio-3.35 00:17:54.412 Starting 2 threads 00:18:21.029 00:18:21.029 first_half: (groupid=0, jobs=1): err= 0: pid=72484: Fri Jul 26 23:26:12 2024 00:18:21.029 read: IOPS=2613, BW=10.2MiB/s (10.7MB/s)(255MiB/24988msec) 00:18:21.029 slat (nsec): min=3066, max=57203, avg=7790.05, stdev=3966.14 00:18:21.029 clat (usec): min=955, max=290826, avg=39404.37, stdev=19792.88 00:18:21.029 lat (usec): min=962, max=290831, avg=39412.16, stdev=19793.61 00:18:21.029 clat percentiles (msec): 00:18:21.029 | 1.00th=[ 
13], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:18:21.029 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:18:21.029 | 70.00th=[ 36], 80.00th=[ 39], 90.00th=[ 41], 95.00th=[ 61], 00:18:21.029 | 99.00th=[ 150], 99.50th=[ 174], 99.90th=[ 213], 99.95th=[ 239], 00:18:21.029 | 99.99th=[ 284] 00:18:21.029 write: IOPS=3164, BW=12.4MiB/s (13.0MB/s)(256MiB/20709msec); 0 zone resets 00:18:21.029 slat (usec): min=3, max=801, avg= 9.28, stdev= 9.71 00:18:21.029 clat (usec): min=475, max=87321, avg=9502.84, stdev=15962.34 00:18:21.029 lat (usec): min=482, max=87329, avg=9512.12, stdev=15962.75 00:18:21.029 clat percentiles (usec): 00:18:21.029 | 1.00th=[ 947], 5.00th=[ 1221], 10.00th=[ 1483], 20.00th=[ 1909], 00:18:21.029 | 30.00th=[ 3326], 40.00th=[ 4752], 50.00th=[ 5473], 60.00th=[ 6128], 00:18:21.029 | 70.00th=[ 7111], 80.00th=[10159], 90.00th=[12649], 95.00th=[39060], 00:18:21.029 | 99.00th=[81265], 99.50th=[83362], 99.90th=[85459], 99.95th=[85459], 00:18:21.029 | 99.99th=[86508] 00:18:21.029 bw ( KiB/s): min= 1824, max=41616, per=100.00%, avg=24966.10, stdev=11441.91, samples=21 00:18:21.029 iops : min= 456, max=10404, avg=6241.52, stdev=2860.48, samples=21 00:18:21.029 lat (usec) : 500=0.01%, 750=0.11%, 1000=0.66% 00:18:21.029 lat (msec) : 2=10.18%, 4=6.62%, 10=22.58%, 20=7.06%, 50=46.42% 00:18:21.029 lat (msec) : 100=5.15%, 250=1.20%, 500=0.02% 00:18:21.029 cpu : usr=99.43%, sys=0.16%, ctx=34, majf=0, minf=5551 00:18:21.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:21.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.029 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:21.029 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.029 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:21.029 second_half: (groupid=0, jobs=1): err= 0: pid=72485: Fri Jul 26 23:26:12 2024 00:18:21.029 read: IOPS=2591, BW=10.1MiB/s (10.6MB/s)(255MiB/25209msec) 00:18:21.029 slat (nsec): min=3582, max=49740, avg=9970.11, stdev=3603.74 00:18:21.029 clat (usec): min=1020, max=295626, avg=38397.30, stdev=20151.40 00:18:21.029 lat (usec): min=1032, max=295641, avg=38407.27, stdev=20151.89 00:18:21.029 clat percentiles (msec): 00:18:21.029 | 1.00th=[ 12], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 35], 00:18:21.029 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:18:21.030 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 41], 95.00th=[ 53], 00:18:21.030 | 99.00th=[ 157], 99.50th=[ 174], 99.90th=[ 213], 99.95th=[ 239], 00:18:21.030 | 99.99th=[ 288] 00:18:21.030 write: IOPS=2878, BW=11.2MiB/s (11.8MB/s)(256MiB/22764msec); 0 zone resets 00:18:21.030 slat (usec): min=4, max=1202, avg=10.69, stdev= 9.96 00:18:21.030 clat (usec): min=481, max=88883, avg=10909.16, stdev=17214.33 00:18:21.030 lat (usec): min=509, max=88890, avg=10919.85, stdev=17214.75 00:18:21.030 clat percentiles (usec): 00:18:21.030 | 1.00th=[ 996], 5.00th=[ 1287], 10.00th=[ 1516], 20.00th=[ 1795], 00:18:21.030 | 30.00th=[ 2245], 40.00th=[ 4424], 50.00th=[ 5997], 60.00th=[ 7111], 00:18:21.030 | 70.00th=[ 8586], 80.00th=[10945], 90.00th=[30278], 95.00th=[47973], 00:18:21.030 | 99.00th=[82314], 99.50th=[83362], 99.90th=[85459], 99.95th=[86508], 00:18:21.030 | 99.99th=[87557] 00:18:21.030 bw ( KiB/s): min= 1192, max=49904, per=98.99%, avg=22798.70, stdev=13511.56, samples=23 00:18:21.030 iops : min= 298, max=12476, avg=5699.65, stdev=3377.86, samples=23 00:18:21.030 lat (usec) : 500=0.01%, 750=0.07%, 
1000=0.47% 00:18:21.030 lat (msec) : 2=12.63%, 4=5.86%, 10=19.41%, 20=7.76%, 50=48.72% 00:18:21.030 lat (msec) : 100=3.75%, 250=1.32%, 500=0.01% 00:18:21.030 cpu : usr=99.27%, sys=0.25%, ctx=97, majf=0, minf=5568 00:18:21.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:21.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.030 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:21.030 issued rwts: total=65318,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.030 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:21.030 00:18:21.030 Run status group 0 (all jobs): 00:18:21.030 READ: bw=20.2MiB/s (21.2MB/s), 10.1MiB/s-10.2MiB/s (10.6MB/s-10.7MB/s), io=510MiB (535MB), run=24988-25209msec 00:18:21.030 WRITE: bw=22.5MiB/s (23.6MB/s), 11.2MiB/s-12.4MiB/s (11.8MB/s-13.0MB/s), io=512MiB (537MB), run=20709-22764msec 00:18:23.567 ----------------------------------------------------- 00:18:23.567 Suppressions used: 00:18:23.567 count bytes template 00:18:23.567 2 10 /usr/src/fio/parse.c 00:18:23.567 4 384 /usr/src/fio/iolog.c 00:18:23.567 1 8 libtcmalloc_minimal.so 00:18:23.567 1 904 libcrypto.so 00:18:23.567 ----------------------------------------------------- 00:18:23.567 00:18:23.567 23:26:15 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:23.567 23:26:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:23.567 23:26:15 -- common/autotest_common.sh@10 -- # set +x 00:18:23.567 23:26:15 -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:23.567 23:26:15 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:23.567 23:26:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:23.567 23:26:15 -- common/autotest_common.sh@10 -- # set +x 00:18:23.567 23:26:15 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:23.567 23:26:15 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:23.567 23:26:15 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:23.567 23:26:15 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:23.567 23:26:15 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:23.567 23:26:15 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:23.567 23:26:15 -- common/autotest_common.sh@1320 -- # shift 00:18:23.567 23:26:15 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:23.567 23:26:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:23.567 23:26:15 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:23.567 23:26:15 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:23.567 23:26:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:23.567 23:26:15 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:23.567 23:26:15 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:23.567 23:26:15 -- common/autotest_common.sh@1326 -- # break 00:18:23.567 23:26:15 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:23.567 23:26:15 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:23.826 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:23.826 fio-3.35 00:18:23.826 Starting 1 thread 00:18:41.915 00:18:41.915 test: (groupid=0, jobs=1): err= 0: pid=72815: Fri Jul 26 23:26:32 2024 00:18:41.915 read: IOPS=6700, BW=26.2MiB/s (27.4MB/s)(255MiB/9731msec) 00:18:41.915 slat (nsec): min=3270, max=51501, avg=7982.90, stdev=3644.27 00:18:41.915 clat (usec): min=782, max=38361, avg=19090.39, stdev=925.82 00:18:41.915 lat (usec): min=786, max=38365, avg=19098.37, stdev=925.68 00:18:41.915 clat percentiles (usec): 00:18:41.915 | 1.00th=[18220], 5.00th=[18482], 10.00th=[18482], 20.00th=[18744], 00:18:41.915 | 30.00th=[18744], 40.00th=[19006], 50.00th=[19006], 60.00th=[19006], 00:18:41.915 | 70.00th=[19268], 80.00th=[19268], 90.00th=[19530], 95.00th=[19792], 00:18:41.915 | 99.00th=[21627], 99.50th=[22152], 99.90th=[29492], 99.95th=[33817], 00:18:41.915 | 99.99th=[37487] 00:18:41.915 write: IOPS=10.5k, BW=41.0MiB/s (43.0MB/s)(256MiB/6241msec); 0 zone resets 00:18:41.915 slat (usec): min=4, max=804, avg= 9.42, stdev= 9.37 00:18:41.915 clat (usec): min=667, max=86827, avg=12136.44, stdev=16218.93 00:18:41.915 lat (usec): min=675, max=86836, avg=12145.86, stdev=16219.03 00:18:41.915 clat percentiles (usec): 00:18:41.915 | 1.00th=[ 1074], 5.00th=[ 1352], 10.00th=[ 1549], 20.00th=[ 1844], 00:18:41.915 | 30.00th=[ 2180], 40.00th=[ 3064], 50.00th=[ 7242], 60.00th=[ 8586], 00:18:41.915 | 70.00th=[ 9503], 80.00th=[11469], 90.00th=[41157], 95.00th=[53740], 00:18:41.915 | 99.00th=[62653], 99.50th=[64750], 99.90th=[68682], 99.95th=[70779], 00:18:41.915 | 99.99th=[78119] 00:18:41.915 bw ( KiB/s): min=13368, max=61264, per=96.01%, avg=40329.85, stdev=13814.58, samples=13 00:18:41.915 iops : min= 3342, max=15316, avg=10082.46, stdev=3453.65, samples=13 00:18:41.915 lat (usec) : 750=0.01%, 1000=0.24% 00:18:41.915 lat (msec) : 2=12.33%, 4=8.43%, 10=15.86%, 20=53.46%, 50=6.41% 00:18:41.915 lat (msec) : 100=3.25% 00:18:41.915 cpu : usr=98.95%, sys=0.46%, ctx=22, majf=0, minf=5567 00:18:41.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:41.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:41.915 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:41.915 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:41.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:41.916 00:18:41.916 Run status group 0 (all jobs): 00:18:41.916 READ: bw=26.2MiB/s (27.4MB/s), 26.2MiB/s-26.2MiB/s (27.4MB/s-27.4MB/s), io=255MiB (267MB), run=9731-9731msec 00:18:41.916 WRITE: bw=41.0MiB/s (43.0MB/s), 41.0MiB/s-41.0MiB/s (43.0MB/s-43.0MB/s), io=256MiB (268MB), run=6241-6241msec 00:18:42.851 ----------------------------------------------------- 00:18:42.851 Suppressions used: 00:18:42.851 count bytes template 00:18:42.851 1 5 /usr/src/fio/parse.c 00:18:42.851 2 192 /usr/src/fio/iolog.c 00:18:42.851 1 8 libtcmalloc_minimal.so 00:18:42.851 1 904 libcrypto.so 00:18:42.851 ----------------------------------------------------- 00:18:42.851 00:18:42.852 23:26:34 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:42.852 23:26:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:42.852 23:26:34 -- common/autotest_common.sh@10 -- # set +x 00:18:43.111 23:26:34 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 
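Note: the fio_bdev helper traced above locates the ASAN runtime that the spdk_bdev fio plugin was linked against (ldd | grep libasan | awk '{print $3}') and preloads it ahead of the plugin, since fio itself is not sanitizer-instrumented. A minimal sketch of that pattern, with the paths taken from the trace and job.fio standing in for the FTL job config used here:

    # Sketch: preload the sanitizer runtime before fio dlopens the SPDK bdev plugin.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    if [[ -n "$asan_lib" ]]; then
        # The ASAN runtime must be loaded before the instrumented plugin.
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio job.fio
    else
        LD_PRELOAD="$plugin" /usr/src/fio/fio job.fio
    fi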
00:18:43.111 Remove shared memory files 00:18:43.111 23:26:34 -- ftl/fio.sh@85 -- # remove_shm 00:18:43.111 23:26:34 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:43.111 23:26:34 -- ftl/common.sh@205 -- # rm -f rm -f 00:18:43.111 23:26:34 -- ftl/common.sh@206 -- # rm -f rm -f 00:18:43.111 23:26:34 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56530 /dev/shm/spdk_tgt_trace.pid71016 00:18:43.111 23:26:34 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:43.111 23:26:34 -- ftl/common.sh@209 -- # rm -f rm -f 00:18:43.111 ************************************ 00:18:43.111 END TEST ftl_fio_basic 00:18:43.111 ************************************ 00:18:43.111 00:18:43.111 real 1m13.002s 00:18:43.111 user 2m38.715s 00:18:43.111 sys 0m4.242s 00:18:43.111 23:26:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:43.111 23:26:34 -- common/autotest_common.sh@10 -- # set +x 00:18:43.111 23:26:34 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:18:43.111 23:26:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:43.111 23:26:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:43.111 23:26:34 -- common/autotest_common.sh@10 -- # set +x 00:18:43.111 ************************************ 00:18:43.111 START TEST ftl_bdevperf 00:18:43.111 ************************************ 00:18:43.111 23:26:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:18:43.111 * Looking for test storage... 00:18:43.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:43.111 23:26:34 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:43.111 23:26:34 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:43.371 23:26:34 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:43.371 23:26:34 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:43.371 23:26:34 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:43.371 23:26:34 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:43.371 23:26:34 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:43.371 23:26:34 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:43.371 23:26:34 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:43.371 23:26:34 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:43.371 23:26:34 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:43.371 23:26:34 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:43.371 23:26:34 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:43.371 23:26:34 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:43.371 23:26:34 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:43.371 23:26:34 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:43.371 23:26:34 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:43.371 23:26:34 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:43.371 23:26:34 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:43.371 23:26:34 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:43.371 23:26:34 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:43.371 23:26:34 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:43.371 23:26:34 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:43.371 23:26:34 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:43.371 23:26:34 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:43.371 23:26:34 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:43.371 23:26:34 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:43.371 23:26:34 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:43.371 23:26:34 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:43.371 23:26:34 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:18:43.371 23:26:34 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:18:43.371 23:26:34 -- ftl/bdevperf.sh@13 -- # use_append= 00:18:43.371 23:26:34 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:43.371 23:26:34 -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:43.371 23:26:34 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:43.371 23:26:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:43.371 23:26:34 -- common/autotest_common.sh@10 -- # set +x 00:18:43.371 23:26:34 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=73068 00:18:43.371 23:26:34 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:43.371 23:26:34 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:43.371 23:26:34 -- ftl/bdevperf.sh@22 -- # waitforlisten 73068 00:18:43.371 23:26:34 -- common/autotest_common.sh@819 -- # '[' -z 73068 ']' 00:18:43.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
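Note: bdevperf is launched idle with -z and the harness then blocks in waitforlisten until the application's RPC socket answers. A rough approximation of that launch-and-wait step, assuming the default socket /var/tmp/spdk.sock and using the generic rpc_get_methods call as the liveness probe:

    # Sketch: start bdevperf idle and poll its RPC socket until it responds.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # retry until the app is listening
    done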
00:18:43.371 23:26:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.371 23:26:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:43.371 23:26:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.371 23:26:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:43.371 23:26:34 -- common/autotest_common.sh@10 -- # set +x 00:18:43.371 [2024-07-26 23:26:34.985667] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:43.371 [2024-07-26 23:26:34.985770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73068 ] 00:18:43.631 [2024-07-26 23:26:35.154484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.631 [2024-07-26 23:26:35.361775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.199 23:26:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:44.199 23:26:35 -- common/autotest_common.sh@852 -- # return 0 00:18:44.199 23:26:35 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:18:44.199 23:26:35 -- ftl/common.sh@54 -- # local name=nvme0 00:18:44.199 23:26:35 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:18:44.199 23:26:35 -- ftl/common.sh@56 -- # local size=103424 00:18:44.199 23:26:35 -- ftl/common.sh@59 -- # local base_bdev 00:18:44.199 23:26:35 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:18:44.459 23:26:36 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:44.459 23:26:36 -- ftl/common.sh@62 -- # local base_size 00:18:44.459 23:26:36 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:44.459 23:26:36 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:18:44.459 23:26:36 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:44.459 23:26:36 -- common/autotest_common.sh@1359 -- # local bs 00:18:44.459 23:26:36 -- common/autotest_common.sh@1360 -- # local nb 00:18:44.459 23:26:36 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:44.459 23:26:36 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:44.459 { 00:18:44.459 "name": "nvme0n1", 00:18:44.459 "aliases": [ 00:18:44.459 "7690582b-2f14-4658-9dcc-581ad1d6d6b5" 00:18:44.459 ], 00:18:44.459 "product_name": "NVMe disk", 00:18:44.459 "block_size": 4096, 00:18:44.459 "num_blocks": 1310720, 00:18:44.459 "uuid": "7690582b-2f14-4658-9dcc-581ad1d6d6b5", 00:18:44.459 "assigned_rate_limits": { 00:18:44.459 "rw_ios_per_sec": 0, 00:18:44.459 "rw_mbytes_per_sec": 0, 00:18:44.459 "r_mbytes_per_sec": 0, 00:18:44.459 "w_mbytes_per_sec": 0 00:18:44.459 }, 00:18:44.459 "claimed": true, 00:18:44.459 "claim_type": "read_many_write_one", 00:18:44.459 "zoned": false, 00:18:44.459 "supported_io_types": { 00:18:44.459 "read": true, 00:18:44.459 "write": true, 00:18:44.459 "unmap": true, 00:18:44.459 "write_zeroes": true, 00:18:44.459 "flush": true, 00:18:44.459 "reset": true, 00:18:44.459 "compare": true, 00:18:44.459 "compare_and_write": false, 00:18:44.459 "abort": true, 00:18:44.459 "nvme_admin": true, 00:18:44.459 "nvme_io": true 00:18:44.459 }, 00:18:44.459 "driver_specific": { 00:18:44.459 "nvme": [ 00:18:44.459 { 00:18:44.459 "pci_address": 
"0000:00:07.0", 00:18:44.459 "trid": { 00:18:44.459 "trtype": "PCIe", 00:18:44.459 "traddr": "0000:00:07.0" 00:18:44.459 }, 00:18:44.459 "ctrlr_data": { 00:18:44.459 "cntlid": 0, 00:18:44.459 "vendor_id": "0x1b36", 00:18:44.459 "model_number": "QEMU NVMe Ctrl", 00:18:44.459 "serial_number": "12341", 00:18:44.459 "firmware_revision": "8.0.0", 00:18:44.459 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:44.459 "oacs": { 00:18:44.459 "security": 0, 00:18:44.459 "format": 1, 00:18:44.459 "firmware": 0, 00:18:44.459 "ns_manage": 1 00:18:44.459 }, 00:18:44.459 "multi_ctrlr": false, 00:18:44.459 "ana_reporting": false 00:18:44.459 }, 00:18:44.459 "vs": { 00:18:44.459 "nvme_version": "1.4" 00:18:44.459 }, 00:18:44.459 "ns_data": { 00:18:44.459 "id": 1, 00:18:44.459 "can_share": false 00:18:44.459 } 00:18:44.459 } 00:18:44.459 ], 00:18:44.459 "mp_policy": "active_passive" 00:18:44.459 } 00:18:44.459 } 00:18:44.459 ]' 00:18:44.459 23:26:36 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:44.719 23:26:36 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:44.719 23:26:36 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:44.719 23:26:36 -- common/autotest_common.sh@1363 -- # nb=1310720 00:18:44.719 23:26:36 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:18:44.719 23:26:36 -- common/autotest_common.sh@1367 -- # echo 5120 00:18:44.719 23:26:36 -- ftl/common.sh@63 -- # base_size=5120 00:18:44.719 23:26:36 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:44.719 23:26:36 -- ftl/common.sh@67 -- # clear_lvols 00:18:44.719 23:26:36 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:44.719 23:26:36 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:44.978 23:26:36 -- ftl/common.sh@28 -- # stores=812e61cb-693b-4e2d-aa62-72f170ece7ce 00:18:44.978 23:26:36 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:44.978 23:26:36 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 812e61cb-693b-4e2d-aa62-72f170ece7ce 00:18:44.978 23:26:36 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:45.237 23:26:36 -- ftl/common.sh@68 -- # lvs=2b7031af-18f3-4957-a752-15a86761fb24 00:18:45.237 23:26:36 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2b7031af-18f3-4957-a752-15a86761fb24 00:18:45.496 23:26:37 -- ftl/bdevperf.sh@23 -- # split_bdev=2117f33b-3367-44b2-9dbb-72a451cae0ac 00:18:45.496 23:26:37 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 2117f33b-3367-44b2-9dbb-72a451cae0ac 00:18:45.496 23:26:37 -- ftl/common.sh@35 -- # local name=nvc0 00:18:45.496 23:26:37 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:18:45.496 23:26:37 -- ftl/common.sh@37 -- # local base_bdev=2117f33b-3367-44b2-9dbb-72a451cae0ac 00:18:45.496 23:26:37 -- ftl/common.sh@38 -- # local cache_size= 00:18:45.496 23:26:37 -- ftl/common.sh@41 -- # get_bdev_size 2117f33b-3367-44b2-9dbb-72a451cae0ac 00:18:45.496 23:26:37 -- common/autotest_common.sh@1357 -- # local bdev_name=2117f33b-3367-44b2-9dbb-72a451cae0ac 00:18:45.496 23:26:37 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:45.496 23:26:37 -- common/autotest_common.sh@1359 -- # local bs 00:18:45.496 23:26:37 -- common/autotest_common.sh@1360 -- # local nb 00:18:45.496 23:26:37 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2117f33b-3367-44b2-9dbb-72a451cae0ac 
00:18:45.496 23:26:37 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:45.496 { 00:18:45.496 "name": "2117f33b-3367-44b2-9dbb-72a451cae0ac", 00:18:45.496 "aliases": [ 00:18:45.496 "lvs/nvme0n1p0" 00:18:45.496 ], 00:18:45.496 "product_name": "Logical Volume", 00:18:45.496 "block_size": 4096, 00:18:45.496 "num_blocks": 26476544, 00:18:45.496 "uuid": "2117f33b-3367-44b2-9dbb-72a451cae0ac", 00:18:45.496 "assigned_rate_limits": { 00:18:45.496 "rw_ios_per_sec": 0, 00:18:45.496 "rw_mbytes_per_sec": 0, 00:18:45.496 "r_mbytes_per_sec": 0, 00:18:45.496 "w_mbytes_per_sec": 0 00:18:45.496 }, 00:18:45.496 "claimed": false, 00:18:45.496 "zoned": false, 00:18:45.496 "supported_io_types": { 00:18:45.496 "read": true, 00:18:45.496 "write": true, 00:18:45.496 "unmap": true, 00:18:45.496 "write_zeroes": true, 00:18:45.496 "flush": false, 00:18:45.496 "reset": true, 00:18:45.496 "compare": false, 00:18:45.496 "compare_and_write": false, 00:18:45.496 "abort": false, 00:18:45.496 "nvme_admin": false, 00:18:45.496 "nvme_io": false 00:18:45.496 }, 00:18:45.496 "driver_specific": { 00:18:45.496 "lvol": { 00:18:45.496 "lvol_store_uuid": "2b7031af-18f3-4957-a752-15a86761fb24", 00:18:45.496 "base_bdev": "nvme0n1", 00:18:45.496 "thin_provision": true, 00:18:45.496 "snapshot": false, 00:18:45.496 "clone": false, 00:18:45.496 "esnap_clone": false 00:18:45.496 } 00:18:45.496 } 00:18:45.496 } 00:18:45.496 ]' 00:18:45.496 23:26:37 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:45.755 23:26:37 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:45.755 23:26:37 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:45.755 23:26:37 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:45.755 23:26:37 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:45.755 23:26:37 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:45.755 23:26:37 -- ftl/common.sh@41 -- # local base_size=5171 00:18:45.755 23:26:37 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:45.755 23:26:37 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:18:46.013 23:26:37 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:46.013 23:26:37 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:46.013 23:26:37 -- ftl/common.sh@48 -- # get_bdev_size 2117f33b-3367-44b2-9dbb-72a451cae0ac 00:18:46.013 23:26:37 -- common/autotest_common.sh@1357 -- # local bdev_name=2117f33b-3367-44b2-9dbb-72a451cae0ac 00:18:46.013 23:26:37 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:46.013 23:26:37 -- common/autotest_common.sh@1359 -- # local bs 00:18:46.013 23:26:37 -- common/autotest_common.sh@1360 -- # local nb 00:18:46.013 23:26:37 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2117f33b-3367-44b2-9dbb-72a451cae0ac 00:18:46.013 23:26:37 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:46.013 { 00:18:46.013 "name": "2117f33b-3367-44b2-9dbb-72a451cae0ac", 00:18:46.013 "aliases": [ 00:18:46.013 "lvs/nvme0n1p0" 00:18:46.013 ], 00:18:46.013 "product_name": "Logical Volume", 00:18:46.013 "block_size": 4096, 00:18:46.013 "num_blocks": 26476544, 00:18:46.013 "uuid": "2117f33b-3367-44b2-9dbb-72a451cae0ac", 00:18:46.013 "assigned_rate_limits": { 00:18:46.013 "rw_ios_per_sec": 0, 00:18:46.013 "rw_mbytes_per_sec": 0, 00:18:46.013 "r_mbytes_per_sec": 0, 00:18:46.013 "w_mbytes_per_sec": 0 00:18:46.013 }, 00:18:46.013 "claimed": false, 00:18:46.013 "zoned": false, 00:18:46.013 "supported_io_types": { 
00:18:46.013 "read": true, 00:18:46.013 "write": true, 00:18:46.013 "unmap": true, 00:18:46.013 "write_zeroes": true, 00:18:46.013 "flush": false, 00:18:46.013 "reset": true, 00:18:46.013 "compare": false, 00:18:46.013 "compare_and_write": false, 00:18:46.013 "abort": false, 00:18:46.013 "nvme_admin": false, 00:18:46.013 "nvme_io": false 00:18:46.013 }, 00:18:46.013 "driver_specific": { 00:18:46.013 "lvol": { 00:18:46.013 "lvol_store_uuid": "2b7031af-18f3-4957-a752-15a86761fb24", 00:18:46.013 "base_bdev": "nvme0n1", 00:18:46.013 "thin_provision": true, 00:18:46.013 "snapshot": false, 00:18:46.013 "clone": false, 00:18:46.013 "esnap_clone": false 00:18:46.013 } 00:18:46.013 } 00:18:46.013 } 00:18:46.013 ]' 00:18:46.013 23:26:37 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:46.272 23:26:37 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:46.272 23:26:37 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:46.272 23:26:37 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:46.272 23:26:37 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:46.272 23:26:37 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:46.272 23:26:37 -- ftl/common.sh@48 -- # cache_size=5171 00:18:46.272 23:26:37 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:46.272 23:26:37 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:18:46.272 23:26:37 -- ftl/bdevperf.sh@26 -- # get_bdev_size 2117f33b-3367-44b2-9dbb-72a451cae0ac 00:18:46.272 23:26:37 -- common/autotest_common.sh@1357 -- # local bdev_name=2117f33b-3367-44b2-9dbb-72a451cae0ac 00:18:46.272 23:26:37 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:46.272 23:26:37 -- common/autotest_common.sh@1359 -- # local bs 00:18:46.272 23:26:37 -- common/autotest_common.sh@1360 -- # local nb 00:18:46.272 23:26:37 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2117f33b-3367-44b2-9dbb-72a451cae0ac 00:18:46.530 23:26:38 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:46.530 { 00:18:46.530 "name": "2117f33b-3367-44b2-9dbb-72a451cae0ac", 00:18:46.530 "aliases": [ 00:18:46.530 "lvs/nvme0n1p0" 00:18:46.530 ], 00:18:46.530 "product_name": "Logical Volume", 00:18:46.530 "block_size": 4096, 00:18:46.530 "num_blocks": 26476544, 00:18:46.530 "uuid": "2117f33b-3367-44b2-9dbb-72a451cae0ac", 00:18:46.530 "assigned_rate_limits": { 00:18:46.530 "rw_ios_per_sec": 0, 00:18:46.530 "rw_mbytes_per_sec": 0, 00:18:46.530 "r_mbytes_per_sec": 0, 00:18:46.531 "w_mbytes_per_sec": 0 00:18:46.531 }, 00:18:46.531 "claimed": false, 00:18:46.531 "zoned": false, 00:18:46.531 "supported_io_types": { 00:18:46.531 "read": true, 00:18:46.531 "write": true, 00:18:46.531 "unmap": true, 00:18:46.531 "write_zeroes": true, 00:18:46.531 "flush": false, 00:18:46.531 "reset": true, 00:18:46.531 "compare": false, 00:18:46.531 "compare_and_write": false, 00:18:46.531 "abort": false, 00:18:46.531 "nvme_admin": false, 00:18:46.531 "nvme_io": false 00:18:46.531 }, 00:18:46.531 "driver_specific": { 00:18:46.531 "lvol": { 00:18:46.531 "lvol_store_uuid": "2b7031af-18f3-4957-a752-15a86761fb24", 00:18:46.531 "base_bdev": "nvme0n1", 00:18:46.531 "thin_provision": true, 00:18:46.531 "snapshot": false, 00:18:46.531 "clone": false, 00:18:46.531 "esnap_clone": false 00:18:46.531 } 00:18:46.531 } 00:18:46.531 } 00:18:46.531 ]' 00:18:46.531 23:26:38 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:46.531 23:26:38 -- 
common/autotest_common.sh@1362 -- # bs=4096 00:18:46.531 23:26:38 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:46.531 23:26:38 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:46.531 23:26:38 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:46.531 23:26:38 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:46.531 23:26:38 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:18:46.531 23:26:38 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2117f33b-3367-44b2-9dbb-72a451cae0ac -c nvc0n1p0 --l2p_dram_limit 20 00:18:46.791 [2024-07-26 23:26:38.408880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.791 [2024-07-26 23:26:38.408924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:46.791 [2024-07-26 23:26:38.408941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:46.791 [2024-07-26 23:26:38.408951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.791 [2024-07-26 23:26:38.409009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.791 [2024-07-26 23:26:38.409021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:46.791 [2024-07-26 23:26:38.409034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:46.791 [2024-07-26 23:26:38.409043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.791 [2024-07-26 23:26:38.409063] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:46.791 [2024-07-26 23:26:38.410215] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:46.791 [2024-07-26 23:26:38.410263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.791 [2024-07-26 23:26:38.410273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:46.791 [2024-07-26 23:26:38.410286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.203 ms 00:18:46.791 [2024-07-26 23:26:38.410296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.791 [2024-07-26 23:26:38.410359] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID aa5d96f5-3fc1-4c49-a0ce-7fcdd14d6848 00:18:46.791 [2024-07-26 23:26:38.411721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.791 [2024-07-26 23:26:38.411760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:46.791 [2024-07-26 23:26:38.411772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:46.791 [2024-07-26 23:26:38.411784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.791 [2024-07-26 23:26:38.419313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.791 [2024-07-26 23:26:38.419443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:46.791 [2024-07-26 23:26:38.419635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.497 ms 00:18:46.791 [2024-07-26 23:26:38.419659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.792 [2024-07-26 23:26:38.419754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.792 [2024-07-26 23:26:38.419771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands 00:18:46.792 [2024-07-26 23:26:38.419782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:46.792 [2024-07-26 23:26:38.419799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.792 [2024-07-26 23:26:38.419852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.792 [2024-07-26 23:26:38.419866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:46.792 [2024-07-26 23:26:38.419877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:46.792 [2024-07-26 23:26:38.419890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.792 [2024-07-26 23:26:38.419933] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:46.792 [2024-07-26 23:26:38.425726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.792 [2024-07-26 23:26:38.425842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:46.792 [2024-07-26 23:26:38.425986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.824 ms 00:18:46.792 [2024-07-26 23:26:38.426003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.792 [2024-07-26 23:26:38.426044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.792 [2024-07-26 23:26:38.426056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:46.792 [2024-07-26 23:26:38.426069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:46.792 [2024-07-26 23:26:38.426079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.792 [2024-07-26 23:26:38.426120] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:46.792 [2024-07-26 23:26:38.426236] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:46.792 [2024-07-26 23:26:38.426257] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:46.792 [2024-07-26 23:26:38.426271] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:46.792 [2024-07-26 23:26:38.426286] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:46.792 [2024-07-26 23:26:38.426299] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:46.792 [2024-07-26 23:26:38.426313] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:46.792 [2024-07-26 23:26:38.426323] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:46.792 [2024-07-26 23:26:38.426336] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:46.792 [2024-07-26 23:26:38.426346] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:46.792 [2024-07-26 23:26:38.426361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.792 [2024-07-26 23:26:38.426371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:46.792 [2024-07-26 23:26:38.426384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:18:46.792 [2024-07-26 23:26:38.426395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
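Note: for orientation while the startup trace continues, this is the RPC sequence that assembled the ftl0 device now being brought up, recapped from the calls traced above (UUIDs as reported there; not a new invocation):

    # Sketch: the provisioning chain behind this FTL startup.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0   # base device
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 2b7031af-18f3-4957-a752-15a86761fb24   # thin lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0    # cache device
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # yields nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 2117f33b-3367-44b2-9dbb-72a451cae0ac -c nvc0n1p0 --l2p_dram_limit 20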
00:18:46.792 [2024-07-26 23:26:38.426450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.792 [2024-07-26 23:26:38.426461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:46.792 [2024-07-26 23:26:38.426474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:46.792 [2024-07-26 23:26:38.426484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.792 [2024-07-26 23:26:38.426547] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:46.792 [2024-07-26 23:26:38.426561] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:46.792 [2024-07-26 23:26:38.426575] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.792 [2024-07-26 23:26:38.426586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.792 [2024-07-26 23:26:38.426598] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:46.792 [2024-07-26 23:26:38.426607] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:46.792 [2024-07-26 23:26:38.426618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:46.792 [2024-07-26 23:26:38.426628] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:46.792 [2024-07-26 23:26:38.426648] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:46.792 [2024-07-26 23:26:38.426658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.792 [2024-07-26 23:26:38.426671] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:46.792 [2024-07-26 23:26:38.426681] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:46.792 [2024-07-26 23:26:38.426693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.792 [2024-07-26 23:26:38.426704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:46.792 [2024-07-26 23:26:38.426717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:46.792 [2024-07-26 23:26:38.426726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.792 [2024-07-26 23:26:38.426739] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:46.792 [2024-07-26 23:26:38.426748] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:46.792 [2024-07-26 23:26:38.426759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.792 [2024-07-26 23:26:38.426769] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:46.792 [2024-07-26 23:26:38.426780] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:46.792 [2024-07-26 23:26:38.426789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:46.792 [2024-07-26 23:26:38.426801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:46.792 [2024-07-26 23:26:38.426809] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:46.792 [2024-07-26 23:26:38.426821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:46.792 [2024-07-26 23:26:38.426830] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:46.792 [2024-07-26 23:26:38.426841] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:46.792 [2024-07-26 23:26:38.426850] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 4.00 MiB 00:18:46.792 [2024-07-26 23:26:38.426861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:46.792 [2024-07-26 23:26:38.426870] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:46.792 [2024-07-26 23:26:38.426881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:46.792 [2024-07-26 23:26:38.426889] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:46.792 [2024-07-26 23:26:38.426902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:46.792 [2024-07-26 23:26:38.426911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:46.792 [2024-07-26 23:26:38.426921] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:46.792 [2024-07-26 23:26:38.426930] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:46.792 [2024-07-26 23:26:38.426942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.792 [2024-07-26 23:26:38.426952] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:46.792 [2024-07-26 23:26:38.427102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:46.792 [2024-07-26 23:26:38.427145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.792 [2024-07-26 23:26:38.427180] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:46.792 [2024-07-26 23:26:38.427210] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:46.792 [2024-07-26 23:26:38.427240] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.792 [2024-07-26 23:26:38.427270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.792 [2024-07-26 23:26:38.427402] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:46.792 [2024-07-26 23:26:38.427432] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:46.792 [2024-07-26 23:26:38.427464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:46.792 [2024-07-26 23:26:38.427492] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:46.792 [2024-07-26 23:26:38.427526] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:46.792 [2024-07-26 23:26:38.427651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:46.792 [2024-07-26 23:26:38.427748] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:46.792 [2024-07-26 23:26:38.427849] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.792 [2024-07-26 23:26:38.427903] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:46.792 [2024-07-26 23:26:38.427960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:46.792 [2024-07-26 23:26:38.428218] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:46.792 [2024-07-26 23:26:38.428267] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:46.792 [2024-07-26 23:26:38.428300] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:46.792 [2024-07-26 23:26:38.428311] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:46.792 [2024-07-26 23:26:38.428324] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:46.792 [2024-07-26 23:26:38.428334] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:46.792 [2024-07-26 23:26:38.428347] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:46.792 [2024-07-26 23:26:38.428357] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:46.792 [2024-07-26 23:26:38.428371] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:46.792 [2024-07-26 23:26:38.428381] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:46.792 [2024-07-26 23:26:38.428399] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:46.793 [2024-07-26 23:26:38.428409] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:46.793 [2024-07-26 23:26:38.428422] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.793 [2024-07-26 23:26:38.428436] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:46.793 [2024-07-26 23:26:38.428448] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:46.793 [2024-07-26 23:26:38.428459] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:46.793 [2024-07-26 23:26:38.428471] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:46.793 [2024-07-26 23:26:38.428484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.793 [2024-07-26 23:26:38.428497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:46.793 [2024-07-26 23:26:38.428515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.978 ms 00:18:46.793 [2024-07-26 23:26:38.428528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.793 [2024-07-26 23:26:38.452186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.793 [2024-07-26 23:26:38.452223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:46.793 [2024-07-26 23:26:38.452236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.655 ms 00:18:46.793 [2024-07-26 23:26:38.452247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.793 [2024-07-26 23:26:38.452319] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:18:46.793 [2024-07-26 23:26:38.452334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:46.793 [2024-07-26 23:26:38.452344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:46.793 [2024-07-26 23:26:38.452356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.793 [2024-07-26 23:26:38.530171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.793 [2024-07-26 23:26:38.530209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.793 [2024-07-26 23:26:38.530222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.897 ms 00:18:46.793 [2024-07-26 23:26:38.530234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.793 [2024-07-26 23:26:38.530266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.793 [2024-07-26 23:26:38.530279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.793 [2024-07-26 23:26:38.530290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:46.793 [2024-07-26 23:26:38.530305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.793 [2024-07-26 23:26:38.530776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.793 [2024-07-26 23:26:38.530796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.793 [2024-07-26 23:26:38.530806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:18:46.793 [2024-07-26 23:26:38.530817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.793 [2024-07-26 23:26:38.530914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.793 [2024-07-26 23:26:38.530932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.793 [2024-07-26 23:26:38.530943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:18:46.793 [2024-07-26 23:26:38.530955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.053 [2024-07-26 23:26:38.552270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.053 [2024-07-26 23:26:38.552306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:47.053 [2024-07-26 23:26:38.552319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.312 ms 00:18:47.053 [2024-07-26 23:26:38.552331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.053 [2024-07-26 23:26:38.565172] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:47.053 [2024-07-26 23:26:38.570920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.053 [2024-07-26 23:26:38.570949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:47.053 [2024-07-26 23:26:38.570978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.535 ms 00:18:47.053 [2024-07-26 23:26:38.570989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.053 [2024-07-26 23:26:38.664104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.053 [2024-07-26 23:26:38.664140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:47.053 [2024-07-26 23:26:38.664157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.238 ms 
00:18:47.053 [2024-07-26 23:26:38.664167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.053 [2024-07-26 23:26:38.664208] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:47.053 [2024-07-26 23:26:38.664222] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:18:51.249 [2024-07-26 23:26:42.297602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.249 [2024-07-26 23:26:42.297662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:51.249 [2024-07-26 23:26:42.297681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3639.290 ms 00:18:51.249 [2024-07-26 23:26:42.297692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.249 [2024-07-26 23:26:42.297883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.249 [2024-07-26 23:26:42.297900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:51.249 [2024-07-26 23:26:42.297913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:18:51.249 [2024-07-26 23:26:42.297923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.249 [2024-07-26 23:26:42.334338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.249 [2024-07-26 23:26:42.334377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:51.249 [2024-07-26 23:26:42.334394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.429 ms 00:18:51.249 [2024-07-26 23:26:42.334403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.249 [2024-07-26 23:26:42.369975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.249 [2024-07-26 23:26:42.370018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:51.249 [2024-07-26 23:26:42.370038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.579 ms 00:18:51.249 [2024-07-26 23:26:42.370048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.249 [2024-07-26 23:26:42.370488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.250 [2024-07-26 23:26:42.370502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:51.250 [2024-07-26 23:26:42.370514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:18:51.250 [2024-07-26 23:26:42.370526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.250 [2024-07-26 23:26:42.462725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.250 [2024-07-26 23:26:42.462762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:51.250 [2024-07-26 23:26:42.462778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.300 ms 00:18:51.250 [2024-07-26 23:26:42.462788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.250 [2024-07-26 23:26:42.500088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.250 [2024-07-26 23:26:42.500124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:51.250 [2024-07-26 23:26:42.500141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.307 ms 00:18:51.250 [2024-07-26 23:26:42.500151] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.250 [2024-07-26 23:26:42.502115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.250 [2024-07-26 23:26:42.502143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:51.250 [2024-07-26 23:26:42.502159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:18:51.250 [2024-07-26 23:26:42.502168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.250 [2024-07-26 23:26:42.538544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.250 [2024-07-26 23:26:42.538579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:51.250 [2024-07-26 23:26:42.538595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.386 ms 00:18:51.250 [2024-07-26 23:26:42.538604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.250 [2024-07-26 23:26:42.538647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.250 [2024-07-26 23:26:42.538658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:51.250 [2024-07-26 23:26:42.538671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:51.250 [2024-07-26 23:26:42.538680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.250 [2024-07-26 23:26:42.538776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.250 [2024-07-26 23:26:42.538789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:51.250 [2024-07-26 23:26:42.538801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:51.250 [2024-07-26 23:26:42.538810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.250 [2024-07-26 23:26:42.539770] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4137.160 ms, result 0 00:18:51.250 { 00:18:51.250 "name": "ftl0", 00:18:51.250 "uuid": "aa5d96f5-3fc1-4c49-a0ce-7fcdd14d6848" 00:18:51.250 } 00:18:51.250 23:26:42 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:51.250 23:26:42 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:18:51.250 23:26:42 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:18:51.250 23:26:42 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:51.250 [2024-07-26 23:26:42.835860] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:51.250 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:51.250 Zero copy mechanism will not be used. 00:18:51.250 Running I/O for 4 seconds... 
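Note: the -o 69632 IO size chosen above is 17 blocks of 4096 B, i.e. 68 KiB, just over the 65536 B zero-copy threshold, which is why the notice reports that the zero copy mechanism will not be used for this run:

    echo $(( 17 * 4096 ))   # 69632, i.e. 64 KiB + one 4 KiB block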
00:18:55.444
00:18:55.444 Latency(us)
00:18:55.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:55.444 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:18:55.444 ftl0 : 4.00 1415.51 94.00 0.00 0.00 741.83 268.13 24529.94
00:18:55.444 ===================================================================================================================
00:18:55.444 Total : 1415.51 94.00 0.00 0.00 741.83 268.13 24529.94
00:18:55.444 [2024-07-26 23:26:46.838838] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:55.444 0
00:18:55.444 23:26:46 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:18:55.444 [2024-07-26 23:26:46.958608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:55.444 Running I/O for 4 seconds...
00:18:59.633
00:18:59.633 Latency(us)
00:18:59.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:59.633 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:18:59.633 ftl0 : 4.01 10536.33 41.16 0.00 0.00 12126.15 220.43 36847.55
00:18:59.633 ===================================================================================================================
00:18:59.633 Total : 10536.33 41.16 0.00 0.00 12126.15 0.00 36847.55
00:18:59.633 [2024-07-26 23:26:50.974663] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:59.633 0
00:18:59.633 23:26:51 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:18:59.633 [2024-07-26 23:26:51.105883] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:59.633 Running I/O for 4 seconds...
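Note: the MiB/s column in the two completed runs above can be cross-checked as IOPS times IO size: 1415.51 x 69632 B is about 94.00 MiB/s and 10536.33 x 4096 B is about 41.16 MiB/s. In shell:

    # Sketch: reproduce bdevperf's MiB/s column from its IOPS and IO size.
    awk 'BEGIN { printf "%.2f\n", 1415.51  * 69632 / 1048576 }'   # 94.00
    awk 'BEGIN { printf "%.2f\n", 10536.33 * 4096  / 1048576 }'   # 41.16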
00:19:03.870 00:19:03.870 Latency(us) 00:19:03.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.870 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:03.870 Verification LBA range: start 0x0 length 0x1400000 00:19:03.870 ftl0 : 4.01 13676.54 53.42 0.00 0.00 9337.86 167.79 21055.74 00:19:03.870 =================================================================================================================== 00:19:03.870 Total : 13676.54 53.42 0.00 0.00 9337.86 0.00 21055.74 00:19:03.870 [2024-07-26 23:26:55.122629] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:03.870 0 00:19:03.870 23:26:55 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:03.870 [2024-07-26 23:26:55.303799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.870 [2024-07-26 23:26:55.303842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:03.870 [2024-07-26 23:26:55.303860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:03.870 [2024-07-26 23:26:55.303870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-07-26 23:26:55.303894] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:03.871 [2024-07-26 23:26:55.306990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-07-26 23:26:55.307023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:03.871 [2024-07-26 23:26:55.307036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.085 ms 00:19:03.871 [2024-07-26 23:26:55.307055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-07-26 23:26:55.309137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-07-26 23:26:55.309179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:03.871 [2024-07-26 23:26:55.309192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.063 ms 00:19:03.871 [2024-07-26 23:26:55.309203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-07-26 23:26:55.502930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-07-26 23:26:55.502991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:03.871 [2024-07-26 23:26:55.503007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 194.022 ms 00:19:03.871 [2024-07-26 23:26:55.503021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-07-26 23:26:55.507808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-07-26 23:26:55.507845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:03.871 [2024-07-26 23:26:55.507857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.759 ms 00:19:03.871 [2024-07-26 23:26:55.507869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-07-26 23:26:55.543118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-07-26 23:26:55.543161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:03.871 [2024-07-26 23:26:55.543175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
35.239 ms 00:19:03.871 [2024-07-26 23:26:55.543192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-07-26 23:26:55.565279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-07-26 23:26:55.565320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:03.871 [2024-07-26 23:26:55.565334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.086 ms 00:19:03.871 [2024-07-26 23:26:55.565396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-07-26 23:26:55.565519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-07-26 23:26:55.565536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:03.871 [2024-07-26 23:26:55.565550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:03.871 [2024-07-26 23:26:55.565562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.871 [2024-07-26 23:26:55.600840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.871 [2024-07-26 23:26:55.600880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:03.871 [2024-07-26 23:26:55.600893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.319 ms 00:19:03.871 [2024-07-26 23:26:55.600905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-07-26 23:26:55.635680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-07-26 23:26:55.635719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:04.131 [2024-07-26 23:26:55.635732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.797 ms 00:19:04.131 [2024-07-26 23:26:55.635746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-07-26 23:26:55.669953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-07-26 23:26:55.670003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:04.131 [2024-07-26 23:26:55.670016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.228 ms 00:19:04.131 [2024-07-26 23:26:55.670028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-07-26 23:26:55.704594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.131 [2024-07-26 23:26:55.704632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:04.131 [2024-07-26 23:26:55.704645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.545 ms 00:19:04.131 [2024-07-26 23:26:55.704656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.131 [2024-07-26 23:26:55.704689] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:04.131 [2024-07-26 23:26:55.704708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:04.131 [2024-07-26 23:26:55.704719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:04.131 [2024-07-26 23:26:55.704732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:04.131 [2024-07-26 23:26:55.704743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:04.131 [2024-07-26 
[2024-07-26 23:26:55.704689] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-07-26 23:26:55.704708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free (all 100 per-band lines reported identical values and are collapsed here)
[2024-07-26 23:26:55.705898] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-07-26 23:26:55.705907] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa5d96f5-3fc1-4c49-a0ce-7fcdd14d6848
[2024-07-26 23:26:55.705922] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-07-26 23:26:55.705932] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-07-26 23:26:55.705943] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:04.132 [2024-07-26 23:26:55.705953] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:04.132 [2024-07-26 23:26:55.705973] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:04.132 [2024-07-26 23:26:55.705983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:04.132 [2024-07-26 23:26:55.705995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:04.132 [2024-07-26 23:26:55.706003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:04.132 [2024-07-26 23:26:55.706014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:04.132 [2024-07-26 23:26:55.706023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.132 [2024-07-26 23:26:55.706040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:04.132 [2024-07-26 23:26:55.706051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:19:04.132 [2024-07-26 23:26:55.706062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.132 [2024-07-26 23:26:55.724254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.132 [2024-07-26 23:26:55.724291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:04.132 [2024-07-26 23:26:55.724304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.178 ms 00:19:04.132 [2024-07-26 23:26:55.724318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.132 [2024-07-26 23:26:55.724546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.132 [2024-07-26 23:26:55.724560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:04.132 [2024-07-26 23:26:55.724570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:19:04.132 [2024-07-26 23:26:55.724582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.132 [2024-07-26 23:26:55.778428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.132 [2024-07-26 23:26:55.778465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.132 [2024-07-26 23:26:55.778477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.132 [2024-07-26 23:26:55.778489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.132 [2024-07-26 23:26:55.778538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.132 [2024-07-26 23:26:55.778551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:04.132 [2024-07-26 23:26:55.778560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.132 [2024-07-26 23:26:55.778572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.132 [2024-07-26 23:26:55.778636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.132 [2024-07-26 23:26:55.778651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.132 [2024-07-26 23:26:55.778661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.132 [2024-07-26 23:26:55.778676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.132 [2024-07-26 23:26:55.778692] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:04.132 [2024-07-26 23:26:55.778708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.132 [2024-07-26 23:26:55.778717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.132 [2024-07-26 23:26:55.778729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.392 [2024-07-26 23:26:55.886496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.392 [2024-07-26 23:26:55.886559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.392 [2024-07-26 23:26:55.886573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.392 [2024-07-26 23:26:55.886585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.392 [2024-07-26 23:26:55.928912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.392 [2024-07-26 23:26:55.928952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.392 [2024-07-26 23:26:55.928980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.392 [2024-07-26 23:26:55.928993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.392 [2024-07-26 23:26:55.929059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.392 [2024-07-26 23:26:55.929093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.392 [2024-07-26 23:26:55.929104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.392 [2024-07-26 23:26:55.929119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.392 [2024-07-26 23:26:55.929161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.392 [2024-07-26 23:26:55.929175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.392 [2024-07-26 23:26:55.929187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.392 [2024-07-26 23:26:55.929198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.392 [2024-07-26 23:26:55.929303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.392 [2024-07-26 23:26:55.929321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.392 [2024-07-26 23:26:55.929331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.392 [2024-07-26 23:26:55.929342] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.392 [2024-07-26 23:26:55.929376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.392 [2024-07-26 23:26:55.929392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:04.392 [2024-07-26 23:26:55.929402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.392 [2024-07-26 23:26:55.929416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.392 [2024-07-26 23:26:55.929451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:04.392 [2024-07-26 23:26:55.929464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.392 [2024-07-26 23:26:55.929474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:04.392 [2024-07-26 23:26:55.929489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
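The killprocess call traced just below follows the usual autotest guard pattern: validate the recorded pid, probe the process, kill it, then wait so its exit status is collected. A simplified sketch of that shape (an assumption on my part; the real helper in common/autotest_common.sh additionally checks the process name and handles sudo-owned processes):

    # Simplified shape of the killprocess helper exercised below.
    # Assumes a plain, non-sudo child process and skips the name check.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1    # no pid recorded, nothing to kill
        kill -0 "$pid" || return 1   # probe: is the process still alive?
        echo "killing process with pid $pid"
        kill "$pid"                  # request shutdown (SIGTERM)
        wait "$pid"                  # reap it so the exit code propagates
    }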
[2024-07-26 23:26:55.929529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
[2024-07-26 23:26:55.929545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
[2024-07-26 23:26:55.929558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-07-26 23:26:55.929569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-26 23:26:55.929683] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 626.866 ms, result 0
00:19:04.392 true
23:26:55 -- ftl/bdevperf.sh@37 -- # killprocess 73068
23:26:55 -- common/autotest_common.sh@926 -- # '[' -z 73068 ']'
23:26:55 -- common/autotest_common.sh@930 -- # kill -0 73068
23:26:55 -- common/autotest_common.sh@931 -- # uname
23:26:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
23:26:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73068
00:19:04.392 killing process with pid 73068
00:19:04.392 Received shutdown signal, test time was about 4.000000 seconds
00:19:04.392
00:19:04.392                                                                                       Latency(us)
00:19:04.392 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:04.392 ===================================================================================================================
00:19:04.392 Total               :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
23:26:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0
23:26:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
23:26:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73068'
23:26:55 -- common/autotest_common.sh@945 -- # kill 73068
23:26:55 -- common/autotest_common.sh@950 -- # wait 73068
23:26:57 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
23:26:57 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
23:26:57 -- common/autotest_common.sh@718 -- # xtrace_disable
23:26:57 -- common/autotest_common.sh@10 -- # set +x
00:19:05.770 Remove shared memory files
23:26:57 -- ftl/bdevperf.sh@41 -- # remove_shm
23:26:57 -- ftl/common.sh@204 -- # echo Remove shared memory files
23:26:57 -- ftl/common.sh@205 -- # rm -f rm -f
23:26:57 -- ftl/common.sh@206 -- # rm -f rm -f
23:26:57 -- ftl/common.sh@207 -- # rm -f rm -f
23:26:57 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
23:26:57 -- ftl/common.sh@209 -- # rm -f rm -f
00:19:05.770 ************************************
00:19:05.770 END TEST ftl_bdevperf
00:19:05.770 ************************************
00:19:05.770
00:19:05.770 real    0m22.713s
00:19:05.770 user    0m24.843s
00:19:05.770 sys     0m1.185s
23:26:57 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0
23:26:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
23:26:57 -- common/autotest_common.sh@1083 -- # xtrace_disable
23:26:57 -- common/autotest_common.sh@10 -- # set +x
00:19:05.770 ************************************
00:19:05.770 START TEST ftl_trim 00:19:05.770 ************************************ 00:19:05.770 23:26:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:19:06.030 * Looking for test storage... 00:19:06.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.030 23:26:57 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:06.030 23:26:57 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:06.030 23:26:57 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.030 23:26:57 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.030 23:26:57 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:06.030 23:26:57 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:06.030 23:26:57 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:06.030 23:26:57 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:06.030 23:26:57 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:06.030 23:26:57 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.030 23:26:57 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.030 23:26:57 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:06.030 23:26:57 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:06.030 23:26:57 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:06.030 23:26:57 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:06.030 23:26:57 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:06.030 23:26:57 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:06.030 23:26:57 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.030 23:26:57 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.030 23:26:57 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:06.030 23:26:57 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:06.030 23:26:57 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:06.030 23:26:57 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:06.030 23:26:57 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:06.030 23:26:57 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:06.030 23:26:57 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:06.030 23:26:57 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:06.030 23:26:57 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.030 23:26:57 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.030 23:26:57 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:06.030 23:26:57 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:19:06.030 23:26:57 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:19:06.030 23:26:57 -- ftl/trim.sh@25 -- # timeout=240 00:19:06.030 23:26:57 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:06.030 23:26:57 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:06.030 23:26:57 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:06.030 23:26:57 -- ftl/trim.sh@34 -- # 
export FTL_BDEV_NAME=ftl0 00:19:06.030 23:26:57 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:06.030 23:26:57 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:06.030 23:26:57 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:06.030 23:26:57 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:06.030 23:26:57 -- ftl/trim.sh@40 -- # svcpid=73428 00:19:06.030 23:26:57 -- ftl/trim.sh@41 -- # waitforlisten 73428 00:19:06.030 23:26:57 -- common/autotest_common.sh@819 -- # '[' -z 73428 ']' 00:19:06.030 23:26:57 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:06.030 23:26:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.030 23:26:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:06.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.030 23:26:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.030 23:26:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:06.030 23:26:57 -- common/autotest_common.sh@10 -- # set +x 00:19:06.289 [2024-07-26 23:26:57.806169] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:06.289 [2024-07-26 23:26:57.806416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73428 ] 00:19:06.289 [2024-07-26 23:26:57.974404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:06.548 [2024-07-26 23:26:58.185864] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:06.548 [2024-07-26 23:26:58.186435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.549 [2024-07-26 23:26:58.186579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.549 [2024-07-26 23:26:58.186613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.485 23:26:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:07.485 23:26:59 -- common/autotest_common.sh@852 -- # return 0 00:19:07.485 23:26:59 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:19:07.485 23:26:59 -- ftl/common.sh@54 -- # local name=nvme0 00:19:07.485 23:26:59 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:19:07.485 23:26:59 -- ftl/common.sh@56 -- # local size=103424 00:19:07.485 23:26:59 -- ftl/common.sh@59 -- # local base_bdev 00:19:07.485 23:26:59 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:19:07.744 23:26:59 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:07.744 23:26:59 -- ftl/common.sh@62 -- # local base_size 00:19:07.744 23:26:59 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:07.744 23:26:59 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:19:07.744 23:26:59 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:07.744 23:26:59 -- common/autotest_common.sh@1359 -- # local bs 00:19:07.744 23:26:59 -- common/autotest_common.sh@1360 -- # local nb 00:19:08.003 23:26:59 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:08.003 
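The get_bdev_size helper invoked here derives a device's size in MiB from the bdev_get_bdevs JSON dumped just below: block_size times num_blocks, divided by 1 MiB (4096 * 1310720 bytes gives the 5120 MiB this disk reports). A standalone sketch of the same computation, under the same paths as this run and with the bdev name taken from the log:

    # Equivalent of the get_bdev_size computation shown in this trace.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev=nvme0n1
    bs=$($RPC bdev_get_bdevs -b "$bdev" | jq '.[] .block_size')   # 4096
    nb=$($RPC bdev_get_bdevs -b "$bdev" | jq '.[] .num_blocks')   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))   # bytes -> MiB, prints 5120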
23:26:59 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:08.003 { 00:19:08.003 "name": "nvme0n1", 00:19:08.003 "aliases": [ 00:19:08.003 "c6b569f7-03a7-4278-957d-bd1e705b0c33" 00:19:08.003 ], 00:19:08.003 "product_name": "NVMe disk", 00:19:08.003 "block_size": 4096, 00:19:08.003 "num_blocks": 1310720, 00:19:08.003 "uuid": "c6b569f7-03a7-4278-957d-bd1e705b0c33", 00:19:08.003 "assigned_rate_limits": { 00:19:08.003 "rw_ios_per_sec": 0, 00:19:08.003 "rw_mbytes_per_sec": 0, 00:19:08.003 "r_mbytes_per_sec": 0, 00:19:08.003 "w_mbytes_per_sec": 0 00:19:08.003 }, 00:19:08.003 "claimed": true, 00:19:08.003 "claim_type": "read_many_write_one", 00:19:08.003 "zoned": false, 00:19:08.003 "supported_io_types": { 00:19:08.003 "read": true, 00:19:08.003 "write": true, 00:19:08.003 "unmap": true, 00:19:08.003 "write_zeroes": true, 00:19:08.003 "flush": true, 00:19:08.003 "reset": true, 00:19:08.003 "compare": true, 00:19:08.003 "compare_and_write": false, 00:19:08.003 "abort": true, 00:19:08.003 "nvme_admin": true, 00:19:08.003 "nvme_io": true 00:19:08.003 }, 00:19:08.003 "driver_specific": { 00:19:08.003 "nvme": [ 00:19:08.003 { 00:19:08.003 "pci_address": "0000:00:07.0", 00:19:08.003 "trid": { 00:19:08.003 "trtype": "PCIe", 00:19:08.003 "traddr": "0000:00:07.0" 00:19:08.003 }, 00:19:08.003 "ctrlr_data": { 00:19:08.003 "cntlid": 0, 00:19:08.003 "vendor_id": "0x1b36", 00:19:08.003 "model_number": "QEMU NVMe Ctrl", 00:19:08.003 "serial_number": "12341", 00:19:08.003 "firmware_revision": "8.0.0", 00:19:08.003 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:08.003 "oacs": { 00:19:08.003 "security": 0, 00:19:08.003 "format": 1, 00:19:08.003 "firmware": 0, 00:19:08.003 "ns_manage": 1 00:19:08.003 }, 00:19:08.003 "multi_ctrlr": false, 00:19:08.003 "ana_reporting": false 00:19:08.003 }, 00:19:08.003 "vs": { 00:19:08.003 "nvme_version": "1.4" 00:19:08.003 }, 00:19:08.003 "ns_data": { 00:19:08.003 "id": 1, 00:19:08.003 "can_share": false 00:19:08.003 } 00:19:08.003 } 00:19:08.003 ], 00:19:08.003 "mp_policy": "active_passive" 00:19:08.003 } 00:19:08.003 } 00:19:08.003 ]' 00:19:08.003 23:26:59 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:08.003 23:26:59 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:08.003 23:26:59 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:08.003 23:26:59 -- common/autotest_common.sh@1363 -- # nb=1310720 00:19:08.003 23:26:59 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:19:08.003 23:26:59 -- common/autotest_common.sh@1367 -- # echo 5120 00:19:08.003 23:26:59 -- ftl/common.sh@63 -- # base_size=5120 00:19:08.003 23:26:59 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:08.003 23:26:59 -- ftl/common.sh@67 -- # clear_lvols 00:19:08.003 23:26:59 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:08.003 23:26:59 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:08.262 23:26:59 -- ftl/common.sh@28 -- # stores=2b7031af-18f3-4957-a752-15a86761fb24 00:19:08.262 23:26:59 -- ftl/common.sh@29 -- # for lvs in $stores 00:19:08.262 23:26:59 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2b7031af-18f3-4957-a752-15a86761fb24 00:19:08.520 23:27:00 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:08.778 23:27:00 -- ftl/common.sh@68 -- # lvs=b1cbebb8-c620-4c32-8cf3-207bf9b36ab1 00:19:08.779 23:27:00 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
nvme0n1p0 103424 -t -u b1cbebb8-c620-4c32-8cf3-207bf9b36ab1 00:19:08.779 23:27:00 -- ftl/trim.sh@43 -- # split_bdev=bb10d842-f7da-4122-9355-9547a11e9626 00:19:08.779 23:27:00 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 bb10d842-f7da-4122-9355-9547a11e9626 00:19:08.779 23:27:00 -- ftl/common.sh@35 -- # local name=nvc0 00:19:08.779 23:27:00 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:19:08.779 23:27:00 -- ftl/common.sh@37 -- # local base_bdev=bb10d842-f7da-4122-9355-9547a11e9626 00:19:08.779 23:27:00 -- ftl/common.sh@38 -- # local cache_size= 00:19:08.779 23:27:00 -- ftl/common.sh@41 -- # get_bdev_size bb10d842-f7da-4122-9355-9547a11e9626 00:19:08.779 23:27:00 -- common/autotest_common.sh@1357 -- # local bdev_name=bb10d842-f7da-4122-9355-9547a11e9626 00:19:08.779 23:27:00 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:08.779 23:27:00 -- common/autotest_common.sh@1359 -- # local bs 00:19:08.779 23:27:00 -- common/autotest_common.sh@1360 -- # local nb 00:19:08.779 23:27:00 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bb10d842-f7da-4122-9355-9547a11e9626 00:19:09.037 23:27:00 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:09.037 { 00:19:09.037 "name": "bb10d842-f7da-4122-9355-9547a11e9626", 00:19:09.037 "aliases": [ 00:19:09.037 "lvs/nvme0n1p0" 00:19:09.037 ], 00:19:09.037 "product_name": "Logical Volume", 00:19:09.037 "block_size": 4096, 00:19:09.037 "num_blocks": 26476544, 00:19:09.037 "uuid": "bb10d842-f7da-4122-9355-9547a11e9626", 00:19:09.037 "assigned_rate_limits": { 00:19:09.037 "rw_ios_per_sec": 0, 00:19:09.037 "rw_mbytes_per_sec": 0, 00:19:09.037 "r_mbytes_per_sec": 0, 00:19:09.037 "w_mbytes_per_sec": 0 00:19:09.037 }, 00:19:09.037 "claimed": false, 00:19:09.037 "zoned": false, 00:19:09.037 "supported_io_types": { 00:19:09.037 "read": true, 00:19:09.037 "write": true, 00:19:09.037 "unmap": true, 00:19:09.037 "write_zeroes": true, 00:19:09.037 "flush": false, 00:19:09.037 "reset": true, 00:19:09.037 "compare": false, 00:19:09.037 "compare_and_write": false, 00:19:09.037 "abort": false, 00:19:09.037 "nvme_admin": false, 00:19:09.037 "nvme_io": false 00:19:09.037 }, 00:19:09.037 "driver_specific": { 00:19:09.037 "lvol": { 00:19:09.037 "lvol_store_uuid": "b1cbebb8-c620-4c32-8cf3-207bf9b36ab1", 00:19:09.037 "base_bdev": "nvme0n1", 00:19:09.037 "thin_provision": true, 00:19:09.037 "snapshot": false, 00:19:09.037 "clone": false, 00:19:09.037 "esnap_clone": false 00:19:09.037 } 00:19:09.037 } 00:19:09.037 } 00:19:09.037 ]' 00:19:09.037 23:27:00 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:09.037 23:27:00 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:09.037 23:27:00 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:09.037 23:27:00 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:09.037 23:27:00 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:09.037 23:27:00 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:09.037 23:27:00 -- ftl/common.sh@41 -- # local base_size=5171 00:19:09.037 23:27:00 -- ftl/common.sh@44 -- # local nvc_bdev 00:19:09.037 23:27:00 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:19:09.295 23:27:00 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:09.295 23:27:00 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:09.295 23:27:00 -- ftl/common.sh@48 -- # get_bdev_size bb10d842-f7da-4122-9355-9547a11e9626 00:19:09.295 23:27:00 
-- common/autotest_common.sh@1357 -- # local bdev_name=bb10d842-f7da-4122-9355-9547a11e9626 00:19:09.295 23:27:00 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:09.295 23:27:00 -- common/autotest_common.sh@1359 -- # local bs 00:19:09.295 23:27:00 -- common/autotest_common.sh@1360 -- # local nb 00:19:09.295 23:27:00 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bb10d842-f7da-4122-9355-9547a11e9626 00:19:09.553 23:27:01 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:09.553 { 00:19:09.553 "name": "bb10d842-f7da-4122-9355-9547a11e9626", 00:19:09.553 "aliases": [ 00:19:09.553 "lvs/nvme0n1p0" 00:19:09.553 ], 00:19:09.553 "product_name": "Logical Volume", 00:19:09.553 "block_size": 4096, 00:19:09.553 "num_blocks": 26476544, 00:19:09.553 "uuid": "bb10d842-f7da-4122-9355-9547a11e9626", 00:19:09.553 "assigned_rate_limits": { 00:19:09.553 "rw_ios_per_sec": 0, 00:19:09.553 "rw_mbytes_per_sec": 0, 00:19:09.553 "r_mbytes_per_sec": 0, 00:19:09.553 "w_mbytes_per_sec": 0 00:19:09.553 }, 00:19:09.553 "claimed": false, 00:19:09.553 "zoned": false, 00:19:09.553 "supported_io_types": { 00:19:09.553 "read": true, 00:19:09.553 "write": true, 00:19:09.553 "unmap": true, 00:19:09.553 "write_zeroes": true, 00:19:09.553 "flush": false, 00:19:09.553 "reset": true, 00:19:09.553 "compare": false, 00:19:09.553 "compare_and_write": false, 00:19:09.553 "abort": false, 00:19:09.553 "nvme_admin": false, 00:19:09.553 "nvme_io": false 00:19:09.553 }, 00:19:09.553 "driver_specific": { 00:19:09.553 "lvol": { 00:19:09.553 "lvol_store_uuid": "b1cbebb8-c620-4c32-8cf3-207bf9b36ab1", 00:19:09.553 "base_bdev": "nvme0n1", 00:19:09.553 "thin_provision": true, 00:19:09.553 "snapshot": false, 00:19:09.553 "clone": false, 00:19:09.553 "esnap_clone": false 00:19:09.553 } 00:19:09.553 } 00:19:09.553 } 00:19:09.553 ]' 00:19:09.553 23:27:01 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:09.553 23:27:01 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:09.553 23:27:01 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:09.553 23:27:01 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:09.553 23:27:01 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:09.553 23:27:01 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:09.553 23:27:01 -- ftl/common.sh@48 -- # cache_size=5171 00:19:09.553 23:27:01 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:09.812 23:27:01 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:09.813 23:27:01 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:09.813 23:27:01 -- ftl/trim.sh@47 -- # get_bdev_size bb10d842-f7da-4122-9355-9547a11e9626 00:19:09.813 23:27:01 -- common/autotest_common.sh@1357 -- # local bdev_name=bb10d842-f7da-4122-9355-9547a11e9626 00:19:09.813 23:27:01 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:09.813 23:27:01 -- common/autotest_common.sh@1359 -- # local bs 00:19:09.813 23:27:01 -- common/autotest_common.sh@1360 -- # local nb 00:19:09.813 23:27:01 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bb10d842-f7da-4122-9355-9547a11e9626 00:19:10.072 23:27:01 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:10.072 { 00:19:10.072 "name": "bb10d842-f7da-4122-9355-9547a11e9626", 00:19:10.072 "aliases": [ 00:19:10.072 "lvs/nvme0n1p0" 00:19:10.072 ], 00:19:10.072 "product_name": "Logical Volume", 00:19:10.072 "block_size": 4096, 00:19:10.072 
"num_blocks": 26476544, 00:19:10.072 "uuid": "bb10d842-f7da-4122-9355-9547a11e9626", 00:19:10.072 "assigned_rate_limits": { 00:19:10.072 "rw_ios_per_sec": 0, 00:19:10.072 "rw_mbytes_per_sec": 0, 00:19:10.072 "r_mbytes_per_sec": 0, 00:19:10.072 "w_mbytes_per_sec": 0 00:19:10.072 }, 00:19:10.072 "claimed": false, 00:19:10.072 "zoned": false, 00:19:10.072 "supported_io_types": { 00:19:10.072 "read": true, 00:19:10.072 "write": true, 00:19:10.072 "unmap": true, 00:19:10.072 "write_zeroes": true, 00:19:10.072 "flush": false, 00:19:10.072 "reset": true, 00:19:10.072 "compare": false, 00:19:10.072 "compare_and_write": false, 00:19:10.072 "abort": false, 00:19:10.072 "nvme_admin": false, 00:19:10.072 "nvme_io": false 00:19:10.072 }, 00:19:10.072 "driver_specific": { 00:19:10.072 "lvol": { 00:19:10.072 "lvol_store_uuid": "b1cbebb8-c620-4c32-8cf3-207bf9b36ab1", 00:19:10.072 "base_bdev": "nvme0n1", 00:19:10.072 "thin_provision": true, 00:19:10.072 "snapshot": false, 00:19:10.072 "clone": false, 00:19:10.072 "esnap_clone": false 00:19:10.072 } 00:19:10.072 } 00:19:10.072 } 00:19:10.072 ]' 00:19:10.072 23:27:01 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:10.072 23:27:01 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:10.072 23:27:01 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:10.072 23:27:01 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:10.072 23:27:01 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:10.072 23:27:01 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:10.072 23:27:01 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:10.072 23:27:01 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bb10d842-f7da-4122-9355-9547a11e9626 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:10.333 [2024-07-26 23:27:01.888476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.333 [2024-07-26 23:27:01.888519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:10.333 [2024-07-26 23:27:01.888542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:10.333 [2024-07-26 23:27:01.888553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.333 [2024-07-26 23:27:01.891819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.333 [2024-07-26 23:27:01.891859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:10.333 [2024-07-26 23:27:01.891875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:19:10.333 [2024-07-26 23:27:01.891885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.333 [2024-07-26 23:27:01.892052] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:10.333 [2024-07-26 23:27:01.893181] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:10.333 [2024-07-26 23:27:01.893220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.333 [2024-07-26 23:27:01.893232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:10.333 [2024-07-26 23:27:01.893246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms 00:19:10.333 [2024-07-26 23:27:01.893257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.333 [2024-07-26 23:27:01.893388] mngt/ftl_mngt_md.c: 
567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6c3633be-1fe3-48ba-8b1c-4ab3ac85cc5d 00:19:10.333 [2024-07-26 23:27:01.894785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.333 [2024-07-26 23:27:01.894823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:10.333 [2024-07-26 23:27:01.894835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:10.333 [2024-07-26 23:27:01.894847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.333 [2024-07-26 23:27:01.902308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.333 [2024-07-26 23:27:01.902343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:10.333 [2024-07-26 23:27:01.902355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.334 ms 00:19:10.333 [2024-07-26 23:27:01.902368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.333 [2024-07-26 23:27:01.902534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.333 [2024-07-26 23:27:01.902553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:10.333 [2024-07-26 23:27:01.902564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:10.333 [2024-07-26 23:27:01.902580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.333 [2024-07-26 23:27:01.902641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.333 [2024-07-26 23:27:01.902656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:10.333 [2024-07-26 23:27:01.902669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:10.333 [2024-07-26 23:27:01.902681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.333 [2024-07-26 23:27:01.902744] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:10.333 [2024-07-26 23:27:01.908485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.333 [2024-07-26 23:27:01.908519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:10.333 [2024-07-26 23:27:01.908533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.754 ms 00:19:10.333 [2024-07-26 23:27:01.908543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.333 [2024-07-26 23:27:01.908640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.333 [2024-07-26 23:27:01.908652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:10.333 [2024-07-26 23:27:01.908666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:10.333 [2024-07-26 23:27:01.908676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.333 [2024-07-26 23:27:01.908731] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:10.333 [2024-07-26 23:27:01.908853] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:10.333 [2024-07-26 23:27:01.908874] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:10.333 [2024-07-26 23:27:01.908887] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x140 bytes 00:19:10.333 [2024-07-26 23:27:01.908903] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:10.333 [2024-07-26 23:27:01.908915] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:10.333 [2024-07-26 23:27:01.908929] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:10.333 [2024-07-26 23:27:01.908940] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:10.333 [2024-07-26 23:27:01.908956] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:10.333 [2024-07-26 23:27:01.908983] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:10.333 [2024-07-26 23:27:01.908996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.333 [2024-07-26 23:27:01.909006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:10.334 [2024-07-26 23:27:01.909019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:19:10.334 [2024-07-26 23:27:01.909029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.334 [2024-07-26 23:27:01.909145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.334 [2024-07-26 23:27:01.909157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:10.334 [2024-07-26 23:27:01.909176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:10.334 [2024-07-26 23:27:01.909188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.334 [2024-07-26 23:27:01.909340] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:10.334 [2024-07-26 23:27:01.909353] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:10.334 [2024-07-26 23:27:01.909366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.334 [2024-07-26 23:27:01.909377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.334 [2024-07-26 23:27:01.909389] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:10.334 [2024-07-26 23:27:01.909399] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:10.334 [2024-07-26 23:27:01.909410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:10.334 [2024-07-26 23:27:01.909419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:10.334 [2024-07-26 23:27:01.909431] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:10.334 [2024-07-26 23:27:01.909441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.334 [2024-07-26 23:27:01.909454] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:10.334 [2024-07-26 23:27:01.909463] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:10.334 [2024-07-26 23:27:01.909476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.334 [2024-07-26 23:27:01.909485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:10.334 [2024-07-26 23:27:01.909496] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:10.334 [2024-07-26 23:27:01.909506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.334 [2024-07-26 23:27:01.909520] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:19:10.334 [2024-07-26 23:27:01.909529] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:10.334 [2024-07-26 23:27:01.909540] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.334 [2024-07-26 23:27:01.909548] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:10.334 [2024-07-26 23:27:01.909559] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:10.334 [2024-07-26 23:27:01.909568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:10.334 [2024-07-26 23:27:01.909584] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:10.334 [2024-07-26 23:27:01.909592] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:10.334 [2024-07-26 23:27:01.909603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:10.334 [2024-07-26 23:27:01.909612] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:10.334 [2024-07-26 23:27:01.909623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:10.334 [2024-07-26 23:27:01.909632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:10.334 [2024-07-26 23:27:01.909643] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:10.334 [2024-07-26 23:27:01.909652] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:10.334 [2024-07-26 23:27:01.909663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:10.334 [2024-07-26 23:27:01.909672] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:10.334 [2024-07-26 23:27:01.909685] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:10.334 [2024-07-26 23:27:01.909694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:10.334 [2024-07-26 23:27:01.909705] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:10.334 [2024-07-26 23:27:01.909714] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:10.334 [2024-07-26 23:27:01.909726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.334 [2024-07-26 23:27:01.909736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:10.334 [2024-07-26 23:27:01.909748] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:10.334 [2024-07-26 23:27:01.909757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.334 [2024-07-26 23:27:01.909767] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:10.334 [2024-07-26 23:27:01.909777] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:10.334 [2024-07-26 23:27:01.909789] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.334 [2024-07-26 23:27:01.909798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.334 [2024-07-26 23:27:01.909809] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:10.334 [2024-07-26 23:27:01.909819] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:10.334 [2024-07-26 23:27:01.909831] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:10.334 [2024-07-26 23:27:01.909840] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:10.334 [2024-07-26 23:27:01.909853] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:10.334 [2024-07-26 23:27:01.909862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:10.334 [2024-07-26 23:27:01.909874] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:10.334 [2024-07-26 23:27:01.909886] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.334 [2024-07-26 23:27:01.909903] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:10.334 [2024-07-26 23:27:01.909913] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:10.334 [2024-07-26 23:27:01.909925] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:10.334 [2024-07-26 23:27:01.909936] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:10.334 [2024-07-26 23:27:01.909949] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:10.334 [2024-07-26 23:27:01.909959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:10.334 [2024-07-26 23:27:01.909982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:10.334 [2024-07-26 23:27:01.909992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:10.334 [2024-07-26 23:27:01.910005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:10.334 [2024-07-26 23:27:01.910015] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:10.334 [2024-07-26 23:27:01.910038] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:10.334 [2024-07-26 23:27:01.910049] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:10.334 [2024-07-26 23:27:01.910067] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:10.334 [2024-07-26 23:27:01.910078] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:10.334 [2024-07-26 23:27:01.910090] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.334 [2024-07-26 23:27:01.910101] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:10.334 [2024-07-26 23:27:01.910114] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:10.334 [2024-07-26 23:27:01.910125] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:10.334 [2024-07-26 23:27:01.910138] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:10.334 [2024-07-26 23:27:01.910150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.334 [2024-07-26 23:27:01.910163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:10.334 [2024-07-26 23:27:01.910173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:19:10.334 [2024-07-26 23:27:01.910184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.334 [2024-07-26 23:27:01.934929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.334 [2024-07-26 23:27:01.934979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:10.334 [2024-07-26 23:27:01.934996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.631 ms 00:19:10.334 [2024-07-26 23:27:01.935008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.334 [2024-07-26 23:27:01.935156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.334 [2024-07-26 23:27:01.935174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:10.334 [2024-07-26 23:27:01.935186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:10.334 [2024-07-26 23:27:01.935197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.334 [2024-07-26 23:27:01.988926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.334 [2024-07-26 23:27:01.988977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:10.334 [2024-07-26 23:27:01.988991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.753 ms 00:19:10.334 [2024-07-26 23:27:01.989003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.334 [2024-07-26 23:27:01.989117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.334 [2024-07-26 23:27:01.989150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:10.334 [2024-07-26 23:27:01.989161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:10.334 [2024-07-26 23:27:01.989173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.335 [2024-07-26 23:27:01.989639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.335 [2024-07-26 23:27:01.989669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:10.335 [2024-07-26 23:27:01.989680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:19:10.335 [2024-07-26 23:27:01.989691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.335 [2024-07-26 23:27:01.989815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.335 [2024-07-26 23:27:01.989832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:10.335 [2024-07-26 23:27:01.989843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:10.335 [2024-07-26 23:27:01.989855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.335 [2024-07-26 23:27:02.024082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.335 [2024-07-26 
23:27:02.024123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:10.335 [2024-07-26 23:27:02.024138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.226 ms 00:19:10.335 [2024-07-26 23:27:02.024154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.335 [2024-07-26 23:27:02.037622] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:10.335 [2024-07-26 23:27:02.053849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.335 [2024-07-26 23:27:02.053891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:10.335 [2024-07-26 23:27:02.053907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.572 ms 00:19:10.335 [2024-07-26 23:27:02.053918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.595 [2024-07-26 23:27:02.162001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.595 [2024-07-26 23:27:02.162045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:10.595 [2024-07-26 23:27:02.162062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.114 ms 00:19:10.595 [2024-07-26 23:27:02.162073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.595 [2024-07-26 23:27:02.162192] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:19:10.595 [2024-07-26 23:27:02.162209] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:19:14.783 [2024-07-26 23:27:05.801369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.783 [2024-07-26 23:27:05.801431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:14.783 [2024-07-26 23:27:05.801451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3645.080 ms 00:19:14.783 [2024-07-26 23:27:05.801462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.783 [2024-07-26 23:27:05.801727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.783 [2024-07-26 23:27:05.801742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:14.783 [2024-07-26 23:27:05.801757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:19:14.783 [2024-07-26 23:27:05.801770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.783 [2024-07-26 23:27:05.838087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.783 [2024-07-26 23:27:05.838126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:14.783 [2024-07-26 23:27:05.838143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.309 ms 00:19:14.783 [2024-07-26 23:27:05.838153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.783 [2024-07-26 23:27:05.873769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.783 [2024-07-26 23:27:05.873812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:14.783 [2024-07-26 23:27:05.873832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.560 ms 00:19:14.783 [2024-07-26 23:27:05.873842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.783 [2024-07-26 23:27:05.874303] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.783 [2024-07-26 23:27:05.874317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:14.783 [2024-07-26 23:27:05.874330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:19:14.783 [2024-07-26 23:27:05.874339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.783 [2024-07-26 23:27:05.968071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.783 [2024-07-26 23:27:05.968107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:14.783 [2024-07-26 23:27:05.968123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.820 ms 00:19:14.783 [2024-07-26 23:27:05.968134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.783 [2024-07-26 23:27:06.005147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.783 [2024-07-26 23:27:06.005190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:14.783 [2024-07-26 23:27:06.005208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.952 ms 00:19:14.783 [2024-07-26 23:27:06.005219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.783 [2024-07-26 23:27:06.010009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.783 [2024-07-26 23:27:06.010042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:14.783 [2024-07-26 23:27:06.010058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.683 ms 00:19:14.783 [2024-07-26 23:27:06.010068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.783 [2024-07-26 23:27:06.044429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.783 [2024-07-26 23:27:06.044465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:14.783 [2024-07-26 23:27:06.044480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.335 ms 00:19:14.783 [2024-07-26 23:27:06.044489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.783 [2024-07-26 23:27:06.044605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.783 [2024-07-26 23:27:06.044619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:14.783 [2024-07-26 23:27:06.044631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:14.783 [2024-07-26 23:27:06.044641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.783 [2024-07-26 23:27:06.044751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.783 [2024-07-26 23:27:06.044765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:14.783 [2024-07-26 23:27:06.044777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:14.783 [2024-07-26 23:27:06.044788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.783 [2024-07-26 23:27:06.045815] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:14.783 [2024-07-26 23:27:06.050721] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4163.818 ms, result 0 00:19:14.783 [2024-07-26 23:27:06.051756] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] 
FTL IO channel destroy on app_thread 00:19:14.783 { 00:19:14.783 "name": "ftl0", 00:19:14.783 "uuid": "6c3633be-1fe3-48ba-8b1c-4ab3ac85cc5d" 00:19:14.783 } 00:19:14.783 23:27:06 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:14.783 23:27:06 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:19:14.783 23:27:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:14.783 23:27:06 -- common/autotest_common.sh@889 -- # local i 00:19:14.783 23:27:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:14.783 23:27:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:14.783 23:27:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:14.783 23:27:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:14.783 [ 00:19:14.783 { 00:19:14.783 "name": "ftl0", 00:19:14.783 "aliases": [ 00:19:14.783 "6c3633be-1fe3-48ba-8b1c-4ab3ac85cc5d" 00:19:14.783 ], 00:19:14.783 "product_name": "FTL disk", 00:19:14.783 "block_size": 4096, 00:19:14.783 "num_blocks": 23592960, 00:19:14.783 "uuid": "6c3633be-1fe3-48ba-8b1c-4ab3ac85cc5d", 00:19:14.783 "assigned_rate_limits": { 00:19:14.783 "rw_ios_per_sec": 0, 00:19:14.783 "rw_mbytes_per_sec": 0, 00:19:14.783 "r_mbytes_per_sec": 0, 00:19:14.783 "w_mbytes_per_sec": 0 00:19:14.783 }, 00:19:14.783 "claimed": false, 00:19:14.783 "zoned": false, 00:19:14.783 "supported_io_types": { 00:19:14.783 "read": true, 00:19:14.783 "write": true, 00:19:14.783 "unmap": true, 00:19:14.783 "write_zeroes": true, 00:19:14.783 "flush": true, 00:19:14.783 "reset": false, 00:19:14.784 "compare": false, 00:19:14.784 "compare_and_write": false, 00:19:14.784 "abort": false, 00:19:14.784 "nvme_admin": false, 00:19:14.784 "nvme_io": false 00:19:14.784 }, 00:19:14.784 "driver_specific": { 00:19:14.784 "ftl": { 00:19:14.784 "base_bdev": "bb10d842-f7da-4122-9355-9547a11e9626", 00:19:14.784 "cache": "nvc0n1p0" 00:19:14.784 } 00:19:14.784 } 00:19:14.784 } 00:19:14.784 ] 00:19:14.784 23:27:06 -- common/autotest_common.sh@895 -- # return 0 00:19:14.784 23:27:06 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:14.784 23:27:06 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:15.043 23:27:06 -- ftl/trim.sh@56 -- # echo ']}' 00:19:15.043 23:27:06 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:15.303 23:27:06 -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:15.303 { 00:19:15.303 "name": "ftl0", 00:19:15.303 "aliases": [ 00:19:15.303 "6c3633be-1fe3-48ba-8b1c-4ab3ac85cc5d" 00:19:15.303 ], 00:19:15.303 "product_name": "FTL disk", 00:19:15.303 "block_size": 4096, 00:19:15.303 "num_blocks": 23592960, 00:19:15.303 "uuid": "6c3633be-1fe3-48ba-8b1c-4ab3ac85cc5d", 00:19:15.303 "assigned_rate_limits": { 00:19:15.303 "rw_ios_per_sec": 0, 00:19:15.303 "rw_mbytes_per_sec": 0, 00:19:15.303 "r_mbytes_per_sec": 0, 00:19:15.303 "w_mbytes_per_sec": 0 00:19:15.303 }, 00:19:15.303 "claimed": false, 00:19:15.303 "zoned": false, 00:19:15.303 "supported_io_types": { 00:19:15.303 "read": true, 00:19:15.303 "write": true, 00:19:15.303 "unmap": true, 00:19:15.303 "write_zeroes": true, 00:19:15.303 "flush": true, 00:19:15.303 "reset": false, 00:19:15.303 "compare": false, 00:19:15.303 "compare_and_write": false, 00:19:15.303 "abort": false, 00:19:15.303 "nvme_admin": false, 00:19:15.303 "nvme_io": false 00:19:15.303 }, 00:19:15.303 "driver_specific": { 00:19:15.303 "ftl": { 
00:19:15.303 "base_bdev": "bb10d842-f7da-4122-9355-9547a11e9626", 00:19:15.303 "cache": "nvc0n1p0" 00:19:15.303 } 00:19:15.303 } 00:19:15.303 } 00:19:15.303 ]' 00:19:15.303 23:27:06 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:15.303 23:27:06 -- ftl/trim.sh@60 -- # nb=23592960 00:19:15.303 23:27:06 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:15.303 [2024-07-26 23:27:06.980905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.303 [2024-07-26 23:27:06.980944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:15.303 [2024-07-26 23:27:06.980957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:15.303 [2024-07-26 23:27:06.980983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.303 [2024-07-26 23:27:06.981043] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:15.303 [2024-07-26 23:27:06.984567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.303 [2024-07-26 23:27:06.984598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:15.303 [2024-07-26 23:27:06.984612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.510 ms 00:19:15.303 [2024-07-26 23:27:06.984622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.303 [2024-07-26 23:27:06.985739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.303 [2024-07-26 23:27:06.985766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:15.303 [2024-07-26 23:27:06.985781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:19:15.303 [2024-07-26 23:27:06.985790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.303 [2024-07-26 23:27:06.988425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.303 [2024-07-26 23:27:06.988446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:15.303 [2024-07-26 23:27:06.988462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:19:15.303 [2024-07-26 23:27:06.988472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.303 [2024-07-26 23:27:06.993690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.303 [2024-07-26 23:27:06.993726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:15.303 [2024-07-26 23:27:06.993739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.162 ms 00:19:15.303 [2024-07-26 23:27:06.993748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.303 [2024-07-26 23:27:07.029641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.303 [2024-07-26 23:27:07.029677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:15.303 [2024-07-26 23:27:07.029692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.800 ms 00:19:15.303 [2024-07-26 23:27:07.029702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.303 [2024-07-26 23:27:07.051860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.303 [2024-07-26 23:27:07.051899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:15.303 [2024-07-26 23:27:07.051914] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.085 ms 00:19:15.303 [2024-07-26 23:27:07.051930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.303 [2024-07-26 23:27:07.052293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.303 [2024-07-26 23:27:07.052308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:15.303 [2024-07-26 23:27:07.052325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:19:15.303 [2024-07-26 23:27:07.052337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.563 [2024-07-26 23:27:07.088871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.563 [2024-07-26 23:27:07.088907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:15.563 [2024-07-26 23:27:07.088922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.534 ms 00:19:15.563 [2024-07-26 23:27:07.088931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.563 [2024-07-26 23:27:07.123110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.563 [2024-07-26 23:27:07.123146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:15.563 [2024-07-26 23:27:07.123160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.112 ms 00:19:15.563 [2024-07-26 23:27:07.123169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.563 [2024-07-26 23:27:07.158293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.563 [2024-07-26 23:27:07.158328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:15.563 [2024-07-26 23:27:07.158342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.078 ms 00:19:15.563 [2024-07-26 23:27:07.158351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.563 [2024-07-26 23:27:07.191781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.563 [2024-07-26 23:27:07.191815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:15.563 [2024-07-26 23:27:07.191833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.289 ms 00:19:15.563 [2024-07-26 23:27:07.191842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.563 [2024-07-26 23:27:07.191945] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:15.563 [2024-07-26 23:27:07.191962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:15.563 [2024-07-26 23:27:07.191994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:15.563 [2024-07-26 23:27:07.192005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:15.563 [2024-07-26 23:27:07.192017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:15.563 [2024-07-26 23:27:07.192027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:15.563 [2024-07-26 23:27:07.192039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:15.563 [2024-07-26 23:27:07.192049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:15.563 
[2024-07-26 23:27:07.192061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:15.563 [2024-07-26 23:27:07.192071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:19:15.564 [2024-07-26 23:27:07.192371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.192989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.193001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.193011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.193023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.193032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.193044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.193055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.193071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.193081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.193094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:15.564 [2024-07-26 23:27:07.193104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:15.565 [2024-07-26 23:27:07.193116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:15.565 [2024-07-26 23:27:07.193126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:15.565 [2024-07-26 23:27:07.193138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:15.565 [2024-07-26 23:27:07.193155] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:15.565 [2024-07-26 23:27:07.193167] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6c3633be-1fe3-48ba-8b1c-4ab3ac85cc5d 00:19:15.565 [2024-07-26 23:27:07.193179] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:15.565 [2024-07-26 23:27:07.193191] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:15.565 [2024-07-26 23:27:07.193200] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:15.565 [2024-07-26 23:27:07.193211] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:15.565 [2024-07-26 23:27:07.193220] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:15.565 [2024-07-26 23:27:07.193232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:19:15.565 [2024-07-26 23:27:07.193241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:15.565 [2024-07-26 23:27:07.193254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:15.565 [2024-07-26 23:27:07.193264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:15.565 [2024-07-26 23:27:07.193275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.565 [2024-07-26 23:27:07.193284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:15.565 [2024-07-26 23:27:07.193296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.335 ms 00:19:15.565 [2024-07-26 23:27:07.193308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.565 [2024-07-26 23:27:07.210509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.565 [2024-07-26 23:27:07.210542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:15.565 [2024-07-26 23:27:07.210557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.178 ms 00:19:15.565 [2024-07-26 23:27:07.210567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.565 [2024-07-26 23:27:07.210849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.565 [2024-07-26 23:27:07.210864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:15.565 [2024-07-26 23:27:07.210877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:19:15.565 [2024-07-26 23:27:07.210886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.565 [2024-07-26 23:27:07.271880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.565 [2024-07-26 23:27:07.271914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:15.565 [2024-07-26 23:27:07.271937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.565 [2024-07-26 23:27:07.271947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.565 [2024-07-26 23:27:07.272096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.565 [2024-07-26 23:27:07.272112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:15.565 [2024-07-26 23:27:07.272124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.565 [2024-07-26 23:27:07.272134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.565 [2024-07-26 23:27:07.272224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.565 [2024-07-26 23:27:07.272236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:15.565 [2024-07-26 23:27:07.272250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.565 [2024-07-26 23:27:07.272259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.565 [2024-07-26 23:27:07.272311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.565 [2024-07-26 23:27:07.272322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:15.565 [2024-07-26 23:27:07.272334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.565 [2024-07-26 23:27:07.272346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.824 [2024-07-26 
23:27:07.391181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.824 [2024-07-26 23:27:07.391231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:15.824 [2024-07-26 23:27:07.391250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.824 [2024-07-26 23:27:07.391261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.824 [2024-07-26 23:27:07.431461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.824 [2024-07-26 23:27:07.431499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:15.824 [2024-07-26 23:27:07.431516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.824 [2024-07-26 23:27:07.431526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.824 [2024-07-26 23:27:07.431629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.824 [2024-07-26 23:27:07.431640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:15.824 [2024-07-26 23:27:07.431653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.824 [2024-07-26 23:27:07.431663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.824 [2024-07-26 23:27:07.431760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.824 [2024-07-26 23:27:07.431771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:15.824 [2024-07-26 23:27:07.431783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.824 [2024-07-26 23:27:07.431792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.824 [2024-07-26 23:27:07.431944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.824 [2024-07-26 23:27:07.431958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:15.824 [2024-07-26 23:27:07.431995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.824 [2024-07-26 23:27:07.432004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.824 [2024-07-26 23:27:07.432113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.824 [2024-07-26 23:27:07.432126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:15.824 [2024-07-26 23:27:07.432139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.824 [2024-07-26 23:27:07.432149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.824 [2024-07-26 23:27:07.432229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.824 [2024-07-26 23:27:07.432241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:15.824 [2024-07-26 23:27:07.432276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.824 [2024-07-26 23:27:07.432286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.824 [2024-07-26 23:27:07.432374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.824 [2024-07-26 23:27:07.432385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:15.824 [2024-07-26 23:27:07.432397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.825 [2024-07-26 23:27:07.432406] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.825 [2024-07-26 23:27:07.432685] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 452.491 ms, result 0 00:19:15.825 true 00:19:15.825 23:27:07 -- ftl/trim.sh@63 -- # killprocess 73428 00:19:15.825 23:27:07 -- common/autotest_common.sh@926 -- # '[' -z 73428 ']' 00:19:15.825 23:27:07 -- common/autotest_common.sh@930 -- # kill -0 73428 00:19:15.825 23:27:07 -- common/autotest_common.sh@931 -- # uname 00:19:15.825 23:27:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:15.825 23:27:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73428 00:19:15.825 killing process with pid 73428 00:19:15.825 23:27:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:15.825 23:27:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:15.825 23:27:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73428' 00:19:15.825 23:27:07 -- common/autotest_common.sh@945 -- # kill 73428 00:19:15.825 23:27:07 -- common/autotest_common.sh@950 -- # wait 73428 00:19:21.098 23:27:12 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:21.358 65536+0 records in 00:19:21.358 65536+0 records out 00:19:21.358 268435456 bytes (268 MB, 256 MiB) copied, 0.9496 s, 283 MB/s 00:19:21.358 23:27:12 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:21.358 [2024-07-26 23:27:13.063530] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:21.358 [2024-07-26 23:27:13.063632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73653 ] 00:19:21.616 [2024-07-26 23:27:13.231029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.874 [2024-07-26 23:27:13.444373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.132 [2024-07-26 23:27:13.847847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.132 [2024-07-26 23:27:13.847913] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.392 [2024-07-26 23:27:14.004018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.392 [2024-07-26 23:27:14.004063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:22.392 [2024-07-26 23:27:14.004078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:22.392 [2024-07-26 23:27:14.004093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.392 [2024-07-26 23:27:14.007157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.392 [2024-07-26 23:27:14.007195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.392 [2024-07-26 23:27:14.007207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.050 ms 00:19:22.392 [2024-07-26 23:27:14.007220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.392 [2024-07-26 23:27:14.007305] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:22.392 [2024-07-26 23:27:14.008401] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:22.392 [2024-07-26 23:27:14.008435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.392 [2024-07-26 23:27:14.008450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.392 [2024-07-26 23:27:14.008460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:19:22.392 [2024-07-26 23:27:14.008469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.392 [2024-07-26 23:27:14.009922] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:22.392 [2024-07-26 23:27:14.028615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.392 [2024-07-26 23:27:14.028665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:22.392 [2024-07-26 23:27:14.028679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.724 ms 00:19:22.392 [2024-07-26 23:27:14.028689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.392 [2024-07-26 23:27:14.028781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.392 [2024-07-26 23:27:14.028794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:22.392 [2024-07-26 23:27:14.028808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:22.392 [2024-07-26 23:27:14.028817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.392 [2024-07-26 23:27:14.035537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.392 [2024-07-26 23:27:14.035564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.392 [2024-07-26 23:27:14.035575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.691 ms 00:19:22.392 [2024-07-26 23:27:14.035584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.392 [2024-07-26 23:27:14.035682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.392 [2024-07-26 23:27:14.035699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.392 [2024-07-26 23:27:14.035709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:22.392 [2024-07-26 23:27:14.035719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.392 [2024-07-26 23:27:14.035745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.392 [2024-07-26 23:27:14.035756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:22.392 [2024-07-26 23:27:14.035765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:22.392 [2024-07-26 23:27:14.035774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.392 [2024-07-26 23:27:14.035798] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:22.392 [2024-07-26 23:27:14.041141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.392 [2024-07-26 23:27:14.041172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.392 [2024-07-26 23:27:14.041184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.359 ms 00:19:22.392 [2024-07-26 23:27:14.041193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.392 
[2024-07-26 23:27:14.041257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.392 [2024-07-26 23:27:14.041271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:22.392 [2024-07-26 23:27:14.041282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:22.392 [2024-07-26 23:27:14.041291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.392 [2024-07-26 23:27:14.041310] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:22.392 [2024-07-26 23:27:14.041331] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:22.392 [2024-07-26 23:27:14.041363] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:22.392 [2024-07-26 23:27:14.041380] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:22.392 [2024-07-26 23:27:14.041443] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:22.392 [2024-07-26 23:27:14.041456] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:22.392 [2024-07-26 23:27:14.041469] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:22.392 [2024-07-26 23:27:14.041481] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:22.392 [2024-07-26 23:27:14.041492] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:22.393 [2024-07-26 23:27:14.041503] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:22.393 [2024-07-26 23:27:14.041513] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:22.393 [2024-07-26 23:27:14.041522] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:22.393 [2024-07-26 23:27:14.041533] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:22.393 [2024-07-26 23:27:14.041544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.393 [2024-07-26 23:27:14.041557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:22.393 [2024-07-26 23:27:14.041568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:19:22.393 [2024-07-26 23:27:14.041577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.393 [2024-07-26 23:27:14.041633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.393 [2024-07-26 23:27:14.041643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:22.393 [2024-07-26 23:27:14.041653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:22.393 [2024-07-26 23:27:14.041662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.393 [2024-07-26 23:27:14.041724] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:22.393 [2024-07-26 23:27:14.041735] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:22.393 [2024-07-26 23:27:14.041745] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.393 [2024-07-26 23:27:14.041758] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.393 [2024-07-26 23:27:14.041768] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:22.393 [2024-07-26 23:27:14.041777] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:22.393 [2024-07-26 23:27:14.041787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:22.393 [2024-07-26 23:27:14.041796] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:22.393 [2024-07-26 23:27:14.041805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:22.393 [2024-07-26 23:27:14.041814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.393 [2024-07-26 23:27:14.041822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:22.393 [2024-07-26 23:27:14.041831] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:22.393 [2024-07-26 23:27:14.041839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.393 [2024-07-26 23:27:14.041848] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:22.393 [2024-07-26 23:27:14.041858] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:22.393 [2024-07-26 23:27:14.041866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.393 [2024-07-26 23:27:14.041874] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:22.393 [2024-07-26 23:27:14.041883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:22.393 [2024-07-26 23:27:14.041891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.393 [2024-07-26 23:27:14.041909] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:22.393 [2024-07-26 23:27:14.041918] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:22.393 [2024-07-26 23:27:14.041927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:22.393 [2024-07-26 23:27:14.041936] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:22.393 [2024-07-26 23:27:14.041944] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:22.393 [2024-07-26 23:27:14.041953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:22.393 [2024-07-26 23:27:14.041961] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:22.393 [2024-07-26 23:27:14.041989] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:22.393 [2024-07-26 23:27:14.041997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:22.393 [2024-07-26 23:27:14.042006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:22.393 [2024-07-26 23:27:14.042015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:22.393 [2024-07-26 23:27:14.042023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:22.393 [2024-07-26 23:27:14.042031] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:22.393 [2024-07-26 23:27:14.042040] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:22.393 [2024-07-26 23:27:14.042049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:22.393 [2024-07-26 23:27:14.042058] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:22.393 [2024-07-26 23:27:14.042066] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:22.393 [2024-07-26 23:27:14.042074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.393 [2024-07-26 23:27:14.042102] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:22.393 [2024-07-26 23:27:14.042112] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:22.393 [2024-07-26 23:27:14.042120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.393 [2024-07-26 23:27:14.042130] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:22.393 [2024-07-26 23:27:14.042139] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:22.393 [2024-07-26 23:27:14.042148] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.393 [2024-07-26 23:27:14.042159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.393 [2024-07-26 23:27:14.042168] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:22.393 [2024-07-26 23:27:14.042178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:22.393 [2024-07-26 23:27:14.042187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:22.393 [2024-07-26 23:27:14.042195] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:22.393 [2024-07-26 23:27:14.042204] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:22.393 [2024-07-26 23:27:14.042213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:22.393 [2024-07-26 23:27:14.042222] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:22.393 [2024-07-26 23:27:14.042238] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.393 [2024-07-26 23:27:14.042249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:22.393 [2024-07-26 23:27:14.042271] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:22.393 [2024-07-26 23:27:14.042281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:22.393 [2024-07-26 23:27:14.042291] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:22.393 [2024-07-26 23:27:14.042301] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:22.393 [2024-07-26 23:27:14.042311] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:22.393 [2024-07-26 23:27:14.042320] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:22.393 [2024-07-26 23:27:14.042330] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:22.393 [2024-07-26 23:27:14.042340] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:22.393 
[2024-07-26 23:27:14.042350] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:22.393 [2024-07-26 23:27:14.042360] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:22.393 [2024-07-26 23:27:14.042370] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:22.393 [2024-07-26 23:27:14.042380] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:22.393 [2024-07-26 23:27:14.042389] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:22.393 [2024-07-26 23:27:14.042399] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.393 [2024-07-26 23:27:14.042409] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:22.393 [2024-07-26 23:27:14.042418] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:22.393 [2024-07-26 23:27:14.042428] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:22.393 [2024-07-26 23:27:14.042437] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:22.393 [2024-07-26 23:27:14.042448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.393 [2024-07-26 23:27:14.042463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:22.393 [2024-07-26 23:27:14.042472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:19:22.393 [2024-07-26 23:27:14.042481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.393 [2024-07-26 23:27:14.066150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.393 [2024-07-26 23:27:14.066187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.393 [2024-07-26 23:27:14.066199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.664 ms 00:19:22.393 [2024-07-26 23:27:14.066209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.393 [2024-07-26 23:27:14.066314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.393 [2024-07-26 23:27:14.066326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:22.393 [2024-07-26 23:27:14.066336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:22.393 [2024-07-26 23:27:14.066345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.393 [2024-07-26 23:27:14.145048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.393 [2024-07-26 23:27:14.145085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.393 [2024-07-26 23:27:14.145098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.808 ms 00:19:22.393 [2024-07-26 23:27:14.145109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.394 
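(An aside on reading the two dumps above: dump_region reports each region's offset/size in MiB, while the superblock layout lines report blk_offs/blk_sz in blocks. The two agree if one assumes a 4 KiB FTL block size; that 4096-byte figure is an inference from the numbers, not something this log states. A minimal sketch of the conversion:

# Sketch: reconcile the two layout dumps above, assuming the FTL block
# size is 4 KiB (inferred from the numbers; the log never states it).
FTL_BLOCK_SIZE = 4096

def blocks_to_mib(blocks: int) -> float:
    return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

# "Region type:0x2 ... blk_offs:0x20 blk_sz:0x5a00" is the l2p region:
print(f"{blocks_to_mib(0x20):.2f} MiB")    # 0.12 MiB offset, as dumped
print(f"{blocks_to_mib(0x5a00):.2f} MiB")  # 90.00 MiB size, as dumped

Likewise, the Action / name / duration / status quadruplets emitted by trace_step make it easy to see where startup time goes. The following is a hypothetical post-processing helper, written against the flattened text of this log, where a step's "name:" line always precedes its "duration:" line and each message is followed by a runner timestamp:

import re
import sys

# Pair each trace_step "name:" with the "duration:" that follows it and
# print the slowest FTL management steps first. The patterns mirror the
# *NOTICE* lines above.
NAME = re.compile(r"407:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?) \d{2}:\d{2}:\d{2}")
DURATION = re.compile(r"409:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

def summarize(log_text: str) -> None:
    # Assumes every name line is paired with a following duration line,
    # which holds for the trace_step output in this log.
    steps = zip(NAME.findall(log_text),
                (float(d) for d in DURATION.findall(log_text)))
    for name, ms in sorted(steps, key=lambda s: -s[1]):
        print(f"{ms:10.3f} ms  {name}")

if __name__ == "__main__":
    summarize(sys.stdin.read())

Run against this section it would rank "Restore P2L checkpoints" (86.104 ms) and "Initialize NV cache" (78.808 ms) at the top of the startup breakdown.)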
[2024-07-26 23:27:14.145181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.394 [2024-07-26 23:27:14.145194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.394 [2024-07-26 23:27:14.145205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:22.394 [2024-07-26 23:27:14.145215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.145654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.145677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.653 [2024-07-26 23:27:14.145687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:19:22.653 [2024-07-26 23:27:14.145697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.145804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.145818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.653 [2024-07-26 23:27:14.145829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:22.653 [2024-07-26 23:27:14.145838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.169338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.169373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.653 [2024-07-26 23:27:14.169385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.518 ms 00:19:22.653 [2024-07-26 23:27:14.169394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.188381] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:22.653 [2024-07-26 23:27:14.188421] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:22.653 [2024-07-26 23:27:14.188438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.188449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:22.653 [2024-07-26 23:27:14.188460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.973 ms 00:19:22.653 [2024-07-26 23:27:14.188469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.216863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.216900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:22.653 [2024-07-26 23:27:14.216913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.367 ms 00:19:22.653 [2024-07-26 23:27:14.216923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.234421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.234456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:22.653 [2024-07-26 23:27:14.234468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.432 ms 00:19:22.653 [2024-07-26 23:27:14.234477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.251575] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.251608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:22.653 [2024-07-26 23:27:14.251631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.057 ms 00:19:22.653 [2024-07-26 23:27:14.251640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.252089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.252106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:22.653 [2024-07-26 23:27:14.252118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:19:22.653 [2024-07-26 23:27:14.252128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.338117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.338157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:22.653 [2024-07-26 23:27:14.338171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.104 ms 00:19:22.653 [2024-07-26 23:27:14.338181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.349488] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:22.653 [2024-07-26 23:27:14.364796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.364843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:22.653 [2024-07-26 23:27:14.364857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.582 ms 00:19:22.653 [2024-07-26 23:27:14.364866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.364951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.364985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:22.653 [2024-07-26 23:27:14.364997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:22.653 [2024-07-26 23:27:14.365007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.365062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.365075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:22.653 [2024-07-26 23:27:14.365085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:22.653 [2024-07-26 23:27:14.365100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.367973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.368006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:22.653 [2024-07-26 23:27:14.368018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.858 ms 00:19:22.653 [2024-07-26 23:27:14.368027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.368060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.368070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:22.653 [2024-07-26 23:27:14.368080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.006 ms 00:19:22.653 [2024-07-26 23:27:14.368090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.368138] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:22.653 [2024-07-26 23:27:14.368149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.368160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:22.653 [2024-07-26 23:27:14.368169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:22.653 [2024-07-26 23:27:14.368178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.402032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.402070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:22.653 [2024-07-26 23:27:14.402083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.885 ms 00:19:22.653 [2024-07-26 23:27:14.402103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.402205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.653 [2024-07-26 23:27:14.402217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:22.653 [2024-07-26 23:27:14.402228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:22.653 [2024-07-26 23:27:14.402237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.653 [2024-07-26 23:27:14.403198] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:22.912 [2024-07-26 23:27:14.408108] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 399.483 ms, result 0 00:19:22.912 [2024-07-26 23:27:14.408878] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:22.912 [2024-07-26 23:27:14.426242] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:34.600  Copying: 21/256 [MB] (21 MBps) Copying: 43/256 [MB] (21 MBps) Copying: 64/256 [MB] (21 MBps) Copying: 86/256 [MB] (21 MBps) Copying: 108/256 [MB] (22 MBps) Copying: 130/256 [MB] (21 MBps) Copying: 152/256 [MB] (22 MBps) Copying: 173/256 [MB] (21 MBps) Copying: 196/256 [MB] (22 MBps) Copying: 218/256 [MB] (22 MBps) Copying: 240/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 21 MBps)[2024-07-26 23:27:26.084019] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.600 [2024-07-26 23:27:26.097386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.600 [2024-07-26 23:27:26.097425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:34.600 [2024-07-26 23:27:26.097439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:34.600 [2024-07-26 23:27:26.097448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.600 [2024-07-26 23:27:26.097481] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:34.600 [2024-07-26 23:27:26.100943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.600 [2024-07-26 23:27:26.100983] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:34.600 [2024-07-26 23:27:26.100995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.453 ms 00:19:34.600 [2024-07-26 23:27:26.101004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.600 [2024-07-26 23:27:26.102998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.600 [2024-07-26 23:27:26.103033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:34.600 [2024-07-26 23:27:26.103045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.974 ms 00:19:34.600 [2024-07-26 23:27:26.103055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.600 [2024-07-26 23:27:26.109194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.600 [2024-07-26 23:27:26.109229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:34.600 [2024-07-26 23:27:26.109251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.132 ms 00:19:34.600 [2024-07-26 23:27:26.109261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.600 [2024-07-26 23:27:26.114464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.600 [2024-07-26 23:27:26.114497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:34.601 [2024-07-26 23:27:26.114508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.164 ms 00:19:34.601 [2024-07-26 23:27:26.114517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.601 [2024-07-26 23:27:26.149865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.601 [2024-07-26 23:27:26.149899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:34.601 [2024-07-26 23:27:26.149912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.360 ms 00:19:34.601 [2024-07-26 23:27:26.149921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.601 [2024-07-26 23:27:26.171233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.601 [2024-07-26 23:27:26.171268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:34.601 [2024-07-26 23:27:26.171281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.272 ms 00:19:34.601 [2024-07-26 23:27:26.171299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.601 [2024-07-26 23:27:26.171427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.601 [2024-07-26 23:27:26.171441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:34.601 [2024-07-26 23:27:26.171451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:34.601 [2024-07-26 23:27:26.171460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.601 [2024-07-26 23:27:26.208078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.601 [2024-07-26 23:27:26.208112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:34.601 [2024-07-26 23:27:26.208124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.661 ms 00:19:34.601 [2024-07-26 23:27:26.208148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.601 [2024-07-26 23:27:26.244029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:34.601 [2024-07-26 23:27:26.244063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:34.601 [2024-07-26 23:27:26.244075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.877 ms 00:19:34.601 [2024-07-26 23:27:26.244084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.601 [2024-07-26 23:27:26.279531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.601 [2024-07-26 23:27:26.279564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:34.601 [2024-07-26 23:27:26.279576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.444 ms 00:19:34.601 [2024-07-26 23:27:26.279585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.601 [2024-07-26 23:27:26.313837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.601 [2024-07-26 23:27:26.313873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:34.601 [2024-07-26 23:27:26.313885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.220 ms 00:19:34.601 [2024-07-26 23:27:26.313895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.601 [2024-07-26 23:27:26.313957] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:34.601 [2024-07-26 23:27:26.314000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314174] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 
[2024-07-26 23:27:26.314431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:34.601 [2024-07-26 23:27:26.314663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 
state: free 00:19:34.602 [2024-07-26 23:27:26.314682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 
0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.314996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.315006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.315016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.315026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.315036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:34.602 [2024-07-26 23:27:26.315051] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:34.602 [2024-07-26 23:27:26.315062] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6c3633be-1fe3-48ba-8b1c-4ab3ac85cc5d 00:19:34.602 [2024-07-26 23:27:26.315086] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:34.602 [2024-07-26 23:27:26.315096] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:34.602 [2024-07-26 23:27:26.315105] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:34.602 [2024-07-26 23:27:26.315126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:34.602 [2024-07-26 23:27:26.315146] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:34.602 [2024-07-26 23:27:26.315155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:34.602 [2024-07-26 23:27:26.315165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:34.602 [2024-07-26 23:27:26.315173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:34.602 [2024-07-26 23:27:26.315182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:34.602 [2024-07-26 23:27:26.315191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.602 [2024-07-26 23:27:26.315200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:34.602 [2024-07-26 23:27:26.315217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:19:34.602 [2024-07-26 23:27:26.315226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.602 [2024-07-26 23:27:26.333643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.602 [2024-07-26 23:27:26.333675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:34.602 [2024-07-26 23:27:26.333687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.428 ms 00:19:34.602 [2024-07-26 
23:27:26.333696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.602 [2024-07-26 23:27:26.333986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.602 [2024-07-26 23:27:26.334000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:34.602 [2024-07-26 23:27:26.334011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:19:34.602 [2024-07-26 23:27:26.334020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.879 [2024-07-26 23:27:26.390221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.879 [2024-07-26 23:27:26.390256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:34.879 [2024-07-26 23:27:26.390268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.879 [2024-07-26 23:27:26.390278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.879 [2024-07-26 23:27:26.390362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.879 [2024-07-26 23:27:26.390373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:34.879 [2024-07-26 23:27:26.390385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.879 [2024-07-26 23:27:26.390394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.879 [2024-07-26 23:27:26.390440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.879 [2024-07-26 23:27:26.390453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:34.879 [2024-07-26 23:27:26.390462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.879 [2024-07-26 23:27:26.390472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.879 [2024-07-26 23:27:26.390489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.879 [2024-07-26 23:27:26.390499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:34.879 [2024-07-26 23:27:26.390517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.879 [2024-07-26 23:27:26.390526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.879 [2024-07-26 23:27:26.498450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.879 [2024-07-26 23:27:26.498501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:34.879 [2024-07-26 23:27:26.498515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.879 [2024-07-26 23:27:26.498526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.879 [2024-07-26 23:27:26.541393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.879 [2024-07-26 23:27:26.541443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:34.879 [2024-07-26 23:27:26.541456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.880 [2024-07-26 23:27:26.541467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.880 [2024-07-26 23:27:26.541534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.880 [2024-07-26 23:27:26.541548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.880 [2024-07-26 23:27:26.541559] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.880 [2024-07-26 23:27:26.541568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.880 [2024-07-26 23:27:26.541598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.880 [2024-07-26 23:27:26.541610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.880 [2024-07-26 23:27:26.541620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.880 [2024-07-26 23:27:26.541638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.880 [2024-07-26 23:27:26.541741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.880 [2024-07-26 23:27:26.541756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.880 [2024-07-26 23:27:26.541767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.880 [2024-07-26 23:27:26.541777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.880 [2024-07-26 23:27:26.541814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.880 [2024-07-26 23:27:26.541827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:34.880 [2024-07-26 23:27:26.541836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.880 [2024-07-26 23:27:26.541855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.880 [2024-07-26 23:27:26.541894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.880 [2024-07-26 23:27:26.541906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.880 [2024-07-26 23:27:26.541916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.880 [2024-07-26 23:27:26.541926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.880 [2024-07-26 23:27:26.541991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.880 [2024-07-26 23:27:26.542005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.880 [2024-07-26 23:27:26.542015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.880 [2024-07-26 23:27:26.542032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.880 [2024-07-26 23:27:26.542184] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 445.525 ms, result 0 00:19:36.295 00:19:36.295 00:19:36.295 23:27:27 -- ftl/trim.sh@72 -- # svcpid=73809 00:19:36.295 23:27:27 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:36.295 23:27:27 -- ftl/trim.sh@73 -- # waitforlisten 73809 00:19:36.295 23:27:27 -- common/autotest_common.sh@819 -- # '[' -z 73809 ']' 00:19:36.295 23:27:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.295 23:27:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:36.295 23:27:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
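(Right above, trim.sh launches spdk_tgt and calls "waitforlisten 73809", which blocks until the new target process is up and serving RPC on /var/tmp/spdk.sock. The shell implementation from autotest_common.sh is not reproduced in this log, so the following is only a rough Python equivalent of that wait, under the assumption that "listening" means the UNIX-domain RPC socket accepts a connection; the real helper also takes the pid (73809 here) so it can bail out if the process dies, which this sketch omits:

import socket
import time

def wait_for_listen(path: str = "/var/tmp/spdk.sock", timeout: float = 30.0) -> None:
    # Poll the UNIX-domain RPC socket until the target accepts a connection.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(path)
                return  # target is up and listening
        except OSError:
            time.sleep(0.2)
    raise TimeoutError(f"no listener on {path} after {timeout}s")

Once the socket answers, the test proceeds to drive the target over RPC, as the load_config call below does.)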
00:19:36.295 23:27:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:36.295 23:27:27 -- common/autotest_common.sh@10 -- # set +x 00:19:36.295 [2024-07-26 23:27:27.998798] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:36.296 [2024-07-26 23:27:27.998900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73809 ] 00:19:36.554 [2024-07-26 23:27:28.165053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.813 [2024-07-26 23:27:28.369775] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:36.813 [2024-07-26 23:27:28.369960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.749 23:27:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:37.749 23:27:29 -- common/autotest_common.sh@852 -- # return 0 00:19:37.749 23:27:29 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:38.008 [2024-07-26 23:27:29.624457] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.008 [2024-07-26 23:27:29.624510] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.268 [2024-07-26 23:27:29.804145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.268 [2024-07-26 23:27:29.804189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:38.268 [2024-07-26 23:27:29.804207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:38.268 [2024-07-26 23:27:29.804217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.268 [2024-07-26 23:27:29.807799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.268 [2024-07-26 23:27:29.807835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.268 [2024-07-26 23:27:29.807849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.567 ms 00:19:38.268 [2024-07-26 23:27:29.807858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.268 [2024-07-26 23:27:29.807978] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:38.268 [2024-07-26 23:27:29.809100] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:38.268 [2024-07-26 23:27:29.809136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.268 [2024-07-26 23:27:29.809147] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.268 [2024-07-26 23:27:29.809161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.181 ms 00:19:38.268 [2024-07-26 23:27:29.809171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.268 [2024-07-26 23:27:29.810638] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:38.268 [2024-07-26 23:27:29.828650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.268 [2024-07-26 23:27:29.828695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:38.268 [2024-07-26 23:27:29.828708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.049 ms 00:19:38.268 [2024-07-26 23:27:29.828720] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.268 [2024-07-26 23:27:29.828805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.268 [2024-07-26 23:27:29.828821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:38.268 [2024-07-26 23:27:29.828833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:38.268 [2024-07-26 23:27:29.828845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.268 [2024-07-26 23:27:29.835554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.268 [2024-07-26 23:27:29.835586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.268 [2024-07-26 23:27:29.835597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.677 ms 00:19:38.268 [2024-07-26 23:27:29.835611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.268 [2024-07-26 23:27:29.835691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.268 [2024-07-26 23:27:29.835707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.268 [2024-07-26 23:27:29.835719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:38.268 [2024-07-26 23:27:29.835730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.268 [2024-07-26 23:27:29.835754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.268 [2024-07-26 23:27:29.835773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:38.268 [2024-07-26 23:27:29.835782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:38.268 [2024-07-26 23:27:29.835794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.268 [2024-07-26 23:27:29.835820] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:38.268 [2024-07-26 23:27:29.841171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.268 [2024-07-26 23:27:29.841202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.268 [2024-07-26 23:27:29.841216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.364 ms 00:19:38.268 [2024-07-26 23:27:29.841226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.268 [2024-07-26 23:27:29.841290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.268 [2024-07-26 23:27:29.841302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:38.268 [2024-07-26 23:27:29.841314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:38.268 [2024-07-26 23:27:29.841323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.268 [2024-07-26 23:27:29.841347] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:38.268 [2024-07-26 23:27:29.841378] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:38.268 [2024-07-26 23:27:29.841415] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:38.268 [2024-07-26 23:27:29.841432] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:38.268 [2024-07-26 23:27:29.841497] 
upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:38.268 [2024-07-26 23:27:29.841510] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:38.268 [2024-07-26 23:27:29.841526] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:38.268 [2024-07-26 23:27:29.841539] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:38.268 [2024-07-26 23:27:29.841561] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:38.268 [2024-07-26 23:27:29.841572] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:38.268 [2024-07-26 23:27:29.841586] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:38.268 [2024-07-26 23:27:29.841596] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:38.268 [2024-07-26 23:27:29.841615] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:38.268 [2024-07-26 23:27:29.841625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.268 [2024-07-26 23:27:29.841641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:38.268 [2024-07-26 23:27:29.841651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:19:38.268 [2024-07-26 23:27:29.841665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.268 [2024-07-26 23:27:29.841720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.268 [2024-07-26 23:27:29.841736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:38.268 [2024-07-26 23:27:29.841750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:38.268 [2024-07-26 23:27:29.841764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.268 [2024-07-26 23:27:29.841828] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:38.268 [2024-07-26 23:27:29.841845] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:38.268 [2024-07-26 23:27:29.841855] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.268 [2024-07-26 23:27:29.841869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.268 [2024-07-26 23:27:29.841881] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:38.268 [2024-07-26 23:27:29.841896] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:38.268 [2024-07-26 23:27:29.841906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:38.268 [2024-07-26 23:27:29.841924] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:38.268 [2024-07-26 23:27:29.841933] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:38.268 [2024-07-26 23:27:29.841946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.268 [2024-07-26 23:27:29.841955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:38.268 [2024-07-26 23:27:29.841980] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:38.268 [2024-07-26 23:27:29.841989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.268 
[2024-07-26 23:27:29.842003] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:38.268 [2024-07-26 23:27:29.842012] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:38.268 [2024-07-26 23:27:29.842026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.268 [2024-07-26 23:27:29.842035] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:38.268 [2024-07-26 23:27:29.842049] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:38.268 [2024-07-26 23:27:29.842058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.268 [2024-07-26 23:27:29.842072] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:38.268 [2024-07-26 23:27:29.842081] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:38.268 [2024-07-26 23:27:29.842094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:38.268 [2024-07-26 23:27:29.842103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:38.268 [2024-07-26 23:27:29.842121] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:38.268 [2024-07-26 23:27:29.842131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:38.268 [2024-07-26 23:27:29.842143] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:38.268 [2024-07-26 23:27:29.842152] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:38.268 [2024-07-26 23:27:29.842165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:38.268 [2024-07-26 23:27:29.842173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:38.268 [2024-07-26 23:27:29.842188] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:38.269 [2024-07-26 23:27:29.842209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:38.269 [2024-07-26 23:27:29.842223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:38.269 [2024-07-26 23:27:29.842232] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:38.269 [2024-07-26 23:27:29.842244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:38.269 [2024-07-26 23:27:29.842253] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:38.269 [2024-07-26 23:27:29.842266] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:38.269 [2024-07-26 23:27:29.842275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.269 [2024-07-26 23:27:29.842288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:38.269 [2024-07-26 23:27:29.842297] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:38.269 [2024-07-26 23:27:29.842316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.269 [2024-07-26 23:27:29.842324] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:38.269 [2024-07-26 23:27:29.842338] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:38.269 [2024-07-26 23:27:29.842349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.269 [2024-07-26 23:27:29.842368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.269 [2024-07-26 23:27:29.842378] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region vmap 00:19:38.269 [2024-07-26 23:27:29.842391] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:38.269 [2024-07-26 23:27:29.842400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:38.269 [2024-07-26 23:27:29.842413] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:38.269 [2024-07-26 23:27:29.842422] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:38.269 [2024-07-26 23:27:29.842435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:38.269 [2024-07-26 23:27:29.842445] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:38.269 [2024-07-26 23:27:29.842462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.269 [2024-07-26 23:27:29.842472] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:38.269 [2024-07-26 23:27:29.842487] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:38.269 [2024-07-26 23:27:29.842496] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:38.269 [2024-07-26 23:27:29.842518] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:38.269 [2024-07-26 23:27:29.842528] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:38.269 [2024-07-26 23:27:29.842543] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:38.269 [2024-07-26 23:27:29.842553] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:38.269 [2024-07-26 23:27:29.842568] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:38.269 [2024-07-26 23:27:29.842577] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:38.269 [2024-07-26 23:27:29.842591] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:38.269 [2024-07-26 23:27:29.842601] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:38.269 [2024-07-26 23:27:29.842615] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:38.269 [2024-07-26 23:27:29.842625] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:38.269 [2024-07-26 23:27:29.842640] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:38.269 [2024-07-26 23:27:29.842650] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.269 [2024-07-26 23:27:29.842666] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:38.269 [2024-07-26 23:27:29.842676] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:38.269 [2024-07-26 23:27:29.842690] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:38.269 [2024-07-26 23:27:29.842700] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:38.269 [2024-07-26 23:27:29.842718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-07-26 23:27:29.842729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:38.269 [2024-07-26 23:27:29.842743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:19:38.269 [2024-07-26 23:27:29.842753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-07-26 23:27:29.867102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-07-26 23:27:29.867134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.269 [2024-07-26 23:27:29.867148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.339 ms 00:19:38.269 [2024-07-26 23:27:29.867158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-07-26 23:27:29.867264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-07-26 23:27:29.867279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:38.269 [2024-07-26 23:27:29.867290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:38.269 [2024-07-26 23:27:29.867300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-07-26 23:27:29.917243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-07-26 23:27:29.917276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.269 [2024-07-26 23:27:29.917293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.999 ms 00:19:38.269 [2024-07-26 23:27:29.917303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-07-26 23:27:29.917371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-07-26 23:27:29.917382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.269 [2024-07-26 23:27:29.917398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:38.269 [2024-07-26 23:27:29.917414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-07-26 23:27:29.917853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-07-26 23:27:29.917866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.269 [2024-07-26 23:27:29.917884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:19:38.269 [2024-07-26 23:27:29.917894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-07-26 23:27:29.918017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-07-26 23:27:29.918032] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.269 [2024-07-26 23:27:29.918046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:38.269 [2024-07-26 23:27:29.918056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-07-26 23:27:29.941409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-07-26 23:27:29.941442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.269 [2024-07-26 23:27:29.941458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.359 ms 00:19:38.269 [2024-07-26 23:27:29.941469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-07-26 23:27:29.959617] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:38.269 [2024-07-26 23:27:29.959654] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:38.269 [2024-07-26 23:27:29.959671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-07-26 23:27:29.959683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:38.269 [2024-07-26 23:27:29.959698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.116 ms 00:19:38.269 [2024-07-26 23:27:29.959707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-07-26 23:27:29.988979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-07-26 23:27:29.989016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:38.269 [2024-07-26 23:27:29.989038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.232 ms 00:19:38.269 [2024-07-26 23:27:29.989048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.269 [2024-07-26 23:27:30.007560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.269 [2024-07-26 23:27:30.007595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:38.269 [2024-07-26 23:27:30.007613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.478 ms 00:19:38.269 [2024-07-26 23:27:30.007623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.528 [2024-07-26 23:27:30.026022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.528 [2024-07-26 23:27:30.026060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:38.528 [2024-07-26 23:27:30.026084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.349 ms 00:19:38.528 [2024-07-26 23:27:30.026094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.528 [2024-07-26 23:27:30.026618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.528 [2024-07-26 23:27:30.026636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:38.528 [2024-07-26 23:27:30.026652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:19:38.528 [2024-07-26 23:27:30.026664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.528 [2024-07-26 23:27:30.119223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.528 [2024-07-26 23:27:30.119263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:19:38.528 [2024-07-26 23:27:30.119282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.668 ms 00:19:38.528 [2024-07-26 23:27:30.119297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.528 [2024-07-26 23:27:30.130448] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:38.528 [2024-07-26 23:27:30.145398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.528 [2024-07-26 23:27:30.145444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:38.528 [2024-07-26 23:27:30.145458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.069 ms 00:19:38.528 [2024-07-26 23:27:30.145472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.528 [2024-07-26 23:27:30.145552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.528 [2024-07-26 23:27:30.145574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:38.528 [2024-07-26 23:27:30.145587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:38.528 [2024-07-26 23:27:30.145600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.528 [2024-07-26 23:27:30.145651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.528 [2024-07-26 23:27:30.145668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:38.529 [2024-07-26 23:27:30.145678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:38.529 [2024-07-26 23:27:30.145692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.529 [2024-07-26 23:27:30.148685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.529 [2024-07-26 23:27:30.148725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:38.529 [2024-07-26 23:27:30.148736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.977 ms 00:19:38.529 [2024-07-26 23:27:30.148766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.529 [2024-07-26 23:27:30.148797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.529 [2024-07-26 23:27:30.148818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:38.529 [2024-07-26 23:27:30.148829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:38.529 [2024-07-26 23:27:30.148849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.529 [2024-07-26 23:27:30.148896] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:38.529 [2024-07-26 23:27:30.148919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.529 [2024-07-26 23:27:30.148929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:38.529 [2024-07-26 23:27:30.148944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:38.529 [2024-07-26 23:27:30.148954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.529 [2024-07-26 23:27:30.182846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.529 [2024-07-26 23:27:30.182887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:38.529 [2024-07-26 23:27:30.182904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.895 ms 
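
Each FTL management step in the trace above is logged as a four-record group: Action, name, duration, and status (the status record for the "Set FTL dirty state" step follows directly below). A minimal sketch for summarizing those steps from a saved console log, assuming one record per line as in the live console; "build.log" is a hypothetical file name, not part of the harness:

  # pair each trace_step "name:" record with the "duration:" record that
  # follows it, then list the slowest steps first
  awk '/trace_step/ && /name:/     { sub(/.*name: /, "");     name = $0 }
       /trace_step/ && /duration:/ { sub(/.*duration: /, ""); printf "%10.3f ms  %s\n", $1, name }' build.log \
      | sort -rn | head
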
00:19:38.529 [2024-07-26 23:27:30.182915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.529 [2024-07-26 23:27:30.183040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.529 [2024-07-26 23:27:30.183055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:38.529 [2024-07-26 23:27:30.183070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:38.529 [2024-07-26 23:27:30.183080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.529 [2024-07-26 23:27:30.184006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:38.529 [2024-07-26 23:27:30.188784] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 380.122 ms, result 0 00:19:38.529 [2024-07-26 23:27:30.189907] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:38.529 Some configs were skipped because the RPC state that can call them passed over. 00:19:38.529 23:27:30 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:38.787 [2024-07-26 23:27:30.439844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.787 [2024-07-26 23:27:30.439898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:38.787 [2024-07-26 23:27:30.439912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.166 ms 00:19:38.787 [2024-07-26 23:27:30.439927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.787 [2024-07-26 23:27:30.439986] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 37.292 ms, result 0 00:19:38.787 true 00:19:38.787 23:27:30 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:39.046 [2024-07-26 23:27:30.670812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.046 [2024-07-26 23:27:30.670849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:39.046 [2024-07-26 23:27:30.670866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.040 ms 00:19:39.046 [2024-07-26 23:27:30.670876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.046 [2024-07-26 23:27:30.670918] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 35.145 ms, result 0 00:19:39.046 true 00:19:39.046 23:27:30 -- ftl/trim.sh@81 -- # killprocess 73809 00:19:39.046 23:27:30 -- common/autotest_common.sh@926 -- # '[' -z 73809 ']' 00:19:39.046 23:27:30 -- common/autotest_common.sh@930 -- # kill -0 73809 00:19:39.046 23:27:30 -- common/autotest_common.sh@931 -- # uname 00:19:39.046 23:27:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:39.046 23:27:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73809 00:19:39.046 killing process with pid 73809 00:19:39.046 23:27:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:39.046 23:27:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:39.046 23:27:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73809' 00:19:39.046 23:27:30 -- common/autotest_common.sh@945 -- # kill 73809 00:19:39.046 23:27:30 -- 
common/autotest_common.sh@950 -- # wait 73809 00:19:40.424 [2024-07-26 23:27:31.753378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.424 [2024-07-26 23:27:31.753440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:40.424 [2024-07-26 23:27:31.753455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:40.424 [2024-07-26 23:27:31.753467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.424 [2024-07-26 23:27:31.753503] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:40.424 [2024-07-26 23:27:31.756554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.424 [2024-07-26 23:27:31.756590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:40.424 [2024-07-26 23:27:31.756606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.033 ms 00:19:40.424 [2024-07-26 23:27:31.756616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.424 [2024-07-26 23:27:31.756841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.424 [2024-07-26 23:27:31.756854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:40.424 [2024-07-26 23:27:31.756865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:19:40.425 [2024-07-26 23:27:31.756875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.425 [2024-07-26 23:27:31.760066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.425 [2024-07-26 23:27:31.760103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:40.425 [2024-07-26 23:27:31.760116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.173 ms 00:19:40.425 [2024-07-26 23:27:31.760129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.425 [2024-07-26 23:27:31.765414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.425 [2024-07-26 23:27:31.765447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:40.425 [2024-07-26 23:27:31.765461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.251 ms 00:19:40.425 [2024-07-26 23:27:31.765470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.425 [2024-07-26 23:27:31.779294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.425 [2024-07-26 23:27:31.779328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:40.425 [2024-07-26 23:27:31.779354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.789 ms 00:19:40.425 [2024-07-26 23:27:31.779364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.425 [2024-07-26 23:27:31.790229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.425 [2024-07-26 23:27:31.790265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:40.425 [2024-07-26 23:27:31.790282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.841 ms 00:19:40.425 [2024-07-26 23:27:31.790291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.425 [2024-07-26 23:27:31.790395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.425 [2024-07-26 23:27:31.790407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:19:40.425 [2024-07-26 23:27:31.790419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:40.425 [2024-07-26 23:27:31.790429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.425 [2024-07-26 23:27:31.805339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.425 [2024-07-26 23:27:31.805371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:40.425 [2024-07-26 23:27:31.805385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.911 ms 00:19:40.425 [2024-07-26 23:27:31.805394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.425 [2024-07-26 23:27:31.819902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.425 [2024-07-26 23:27:31.819940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:40.425 [2024-07-26 23:27:31.819959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.478 ms 00:19:40.425 [2024-07-26 23:27:31.819975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.425 [2024-07-26 23:27:31.833630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.425 [2024-07-26 23:27:31.833663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:40.425 [2024-07-26 23:27:31.833676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.637 ms 00:19:40.425 [2024-07-26 23:27:31.833685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.425 [2024-07-26 23:27:31.847872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.425 [2024-07-26 23:27:31.847903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:40.425 [2024-07-26 23:27:31.847917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.148 ms 00:19:40.425 [2024-07-26 23:27:31.847926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.425 [2024-07-26 23:27:31.847978] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:40.425 [2024-07-26 23:27:31.847996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848109] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848391] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:40.425 [2024-07-26 23:27:31.848561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 
23:27:31.848672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 
00:19:40.426 [2024-07-26 23:27:31.848946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.848998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.849010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.849020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.849033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.849043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.849056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.849067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.849079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.849088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.849100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.849110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.849122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:40.426 [2024-07-26 23:27:31.849138] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:40.426 [2024-07-26 23:27:31.849162] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6c3633be-1fe3-48ba-8b1c-4ab3ac85cc5d 00:19:40.426 [2024-07-26 23:27:31.849175] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:40.426 [2024-07-26 23:27:31.849186] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:40.426 [2024-07-26 23:27:31.849195] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:40.426 [2024-07-26 23:27:31.849207] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:40.426 [2024-07-26 23:27:31.849217] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:40.426 [2024-07-26 23:27:31.849229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:40.426 [2024-07-26 23:27:31.849238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:40.426 [2024-07-26 23:27:31.849249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:40.426 [2024-07-26 23:27:31.849257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 
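
The statistics dump above is self-consistent: all 100 bands are free with wr_cnt 0, user writes is 0, and WAF prints as "inf" because, taking write amplification in its conventional sense (media writes divided by host writes), the 960 total writes sit over a zero denominator. A minimal sketch of that arithmetic, with the totals hard-coded from the dump above:

  # WAF = total (media) writes / user writes, assuming the conventional
  # definition of write amplification; totals copied from the dump above
  total=960
  user=0
  if [ "$user" -gt 0 ]; then
      awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.2f\n", t / u }'
  else
      echo "WAF: inf"   # no user writes yet, so the ratio is undefined
  fi
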
00:19:40.426 [2024-07-26 23:27:31.849268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.426 [2024-07-26 23:27:31.849277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:40.426 [2024-07-26 23:27:31.849290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.295 ms 00:19:40.426 [2024-07-26 23:27:31.849298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.426 [2024-07-26 23:27:31.867489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.426 [2024-07-26 23:27:31.867521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:40.426 [2024-07-26 23:27:31.867538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.188 ms 00:19:40.426 [2024-07-26 23:27:31.867547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.426 [2024-07-26 23:27:31.867818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.426 [2024-07-26 23:27:31.867831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:40.426 [2024-07-26 23:27:31.867842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:19:40.426 [2024-07-26 23:27:31.867852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.426 [2024-07-26 23:27:31.932503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.426 [2024-07-26 23:27:31.932537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.426 [2024-07-26 23:27:31.932551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.426 [2024-07-26 23:27:31.932561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.426 [2024-07-26 23:27:31.932637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.426 [2024-07-26 23:27:31.932650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.426 [2024-07-26 23:27:31.932663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.426 [2024-07-26 23:27:31.932673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.426 [2024-07-26 23:27:31.932721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.426 [2024-07-26 23:27:31.932732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.426 [2024-07-26 23:27:31.932747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.426 [2024-07-26 23:27:31.932756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.426 [2024-07-26 23:27:31.932778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.426 [2024-07-26 23:27:31.932789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.426 [2024-07-26 23:27:31.932801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.426 [2024-07-26 23:27:31.932810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.426 [2024-07-26 23:27:32.048019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.426 [2024-07-26 23:27:32.048065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.426 [2024-07-26 23:27:32.048080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.426 [2024-07-26 23:27:32.048090] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.426 [2024-07-26 23:27:32.091145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.426 [2024-07-26 23:27:32.091181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.426 [2024-07-26 23:27:32.091196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.426 [2024-07-26 23:27:32.091205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.427 [2024-07-26 23:27:32.091278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.427 [2024-07-26 23:27:32.091290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.427 [2024-07-26 23:27:32.091311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.427 [2024-07-26 23:27:32.091323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.427 [2024-07-26 23:27:32.091357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.427 [2024-07-26 23:27:32.091368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.427 [2024-07-26 23:27:32.091384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.427 [2024-07-26 23:27:32.091394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.427 [2024-07-26 23:27:32.091492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.427 [2024-07-26 23:27:32.091510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.427 [2024-07-26 23:27:32.091525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.427 [2024-07-26 23:27:32.091535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.427 [2024-07-26 23:27:32.091576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.427 [2024-07-26 23:27:32.091587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:40.427 [2024-07-26 23:27:32.091602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.427 [2024-07-26 23:27:32.091611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.427 [2024-07-26 23:27:32.091652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.427 [2024-07-26 23:27:32.091669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.427 [2024-07-26 23:27:32.091688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.427 [2024-07-26 23:27:32.091698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.427 [2024-07-26 23:27:32.091747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.427 [2024-07-26 23:27:32.091759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.427 [2024-07-26 23:27:32.091773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.427 [2024-07-26 23:27:32.091783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.427 [2024-07-26 23:27:32.091926] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.068 ms, result 0 00:19:41.803 23:27:33 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:41.803 23:27:33 -- ftl/trim.sh@85 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:41.803 [2024-07-26 23:27:33.393142] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:41.803 [2024-07-26 23:27:33.393261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73875 ] 00:19:42.061 [2024-07-26 23:27:33.563919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.061 [2024-07-26 23:27:33.776543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.630 [2024-07-26 23:27:34.178497] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:42.630 [2024-07-26 23:27:34.178559] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:42.630 [2024-07-26 23:27:34.332907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.630 [2024-07-26 23:27:34.332951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:42.630 [2024-07-26 23:27:34.332973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:42.630 [2024-07-26 23:27:34.332987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.630 [2024-07-26 23:27:34.335902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.630 [2024-07-26 23:27:34.335944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:42.630 [2024-07-26 23:27:34.335956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.901 ms 00:19:42.630 [2024-07-26 23:27:34.335980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.630 [2024-07-26 23:27:34.336065] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:42.630 [2024-07-26 23:27:34.337146] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:42.630 [2024-07-26 23:27:34.337176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.630 [2024-07-26 23:27:34.337190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:42.630 [2024-07-26 23:27:34.337200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:19:42.630 [2024-07-26 23:27:34.337209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.630 [2024-07-26 23:27:34.338628] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:42.630 [2024-07-26 23:27:34.357813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.630 [2024-07-26 23:27:34.357848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:42.630 [2024-07-26 23:27:34.357861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.217 ms 00:19:42.630 [2024-07-26 23:27:34.357871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.630 [2024-07-26 23:27:34.357961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.630 [2024-07-26 23:27:34.357994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:42.630 [2024-07-26 23:27:34.358007] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:42.630 [2024-07-26 23:27:34.358017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.630 [2024-07-26 23:27:34.364718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.630 [2024-07-26 23:27:34.364742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:42.630 [2024-07-26 23:27:34.364753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.672 ms 00:19:42.630 [2024-07-26 23:27:34.364763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.630 [2024-07-26 23:27:34.364860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.630 [2024-07-26 23:27:34.364877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:42.630 [2024-07-26 23:27:34.364887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:42.630 [2024-07-26 23:27:34.364897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.630 [2024-07-26 23:27:34.364923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.630 [2024-07-26 23:27:34.364934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:42.630 [2024-07-26 23:27:34.364944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:42.630 [2024-07-26 23:27:34.364953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.630 [2024-07-26 23:27:34.364989] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:42.630 [2024-07-26 23:27:34.370427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.630 [2024-07-26 23:27:34.370456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:42.630 [2024-07-26 23:27:34.370468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.455 ms 00:19:42.630 [2024-07-26 23:27:34.370478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.630 [2024-07-26 23:27:34.370540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.630 [2024-07-26 23:27:34.370555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:42.630 [2024-07-26 23:27:34.370565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:42.630 [2024-07-26 23:27:34.370574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.630 [2024-07-26 23:27:34.370593] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:42.630 [2024-07-26 23:27:34.370615] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:42.630 [2024-07-26 23:27:34.370646] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:42.630 [2024-07-26 23:27:34.370662] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:42.630 [2024-07-26 23:27:34.370725] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:42.630 [2024-07-26 23:27:34.370737] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:42.630 [2024-07-26 23:27:34.370749] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:42.630 [2024-07-26 23:27:34.370761] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:42.630 [2024-07-26 23:27:34.370772] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:42.630 [2024-07-26 23:27:34.370782] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:42.630 [2024-07-26 23:27:34.370791] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:42.630 [2024-07-26 23:27:34.370800] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:42.630 [2024-07-26 23:27:34.370810] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:42.630 [2024-07-26 23:27:34.370820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.630 [2024-07-26 23:27:34.370833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:42.630 [2024-07-26 23:27:34.370842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:19:42.630 [2024-07-26 23:27:34.370852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.630 [2024-07-26 23:27:34.370907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.630 [2024-07-26 23:27:34.370917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:42.631 [2024-07-26 23:27:34.370927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:42.631 [2024-07-26 23:27:34.370936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.631 [2024-07-26 23:27:34.371007] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:42.631 [2024-07-26 23:27:34.371019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:42.631 [2024-07-26 23:27:34.371029] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:42.631 [2024-07-26 23:27:34.371041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.631 [2024-07-26 23:27:34.371052] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:42.631 [2024-07-26 23:27:34.371060] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:42.631 [2024-07-26 23:27:34.371069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:42.631 [2024-07-26 23:27:34.371078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:42.631 [2024-07-26 23:27:34.371086] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:42.631 [2024-07-26 23:27:34.371095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:42.631 [2024-07-26 23:27:34.371103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:42.631 [2024-07-26 23:27:34.371112] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:42.631 [2024-07-26 23:27:34.371120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:42.631 [2024-07-26 23:27:34.371128] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:42.631 [2024-07-26 23:27:34.371137] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:42.631 [2024-07-26 23:27:34.371146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.631 
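
In the layout dump being printed here (the remaining regions continue below), the regions tile the device back-to-back even though the dump does not list them in offset order, and offsets are displayed truncated to two decimals: the superblock's 0x20 (32) blocks, at the 4 KiB block size implied by "blk_sz:0x20" printing as "0.12 MiB", come to 0.125 MiB. A minimal sketch checking that contiguity, with sizes hard-coded from the dump:

  # region sizes copied from the dump above; each region's end should match
  # the next region's printed offset (0.125 prints truncated as 0.12)
  awk 'BEGIN {
      blk = 4096 / (1024 * 1024)    # 4 KiB FTL block, implied by blk_sz:0x20 <-> 0.12 MiB
      sb_end   = 0 + 32 * blk;      printf "sb ends at %.3f MiB (printed l2p offset: 0.12)\n", sb_end
      l2p_end  = sb_end + 90.00;    printf "l2p ends at %.3f MiB (printed band_md offset: 90.12)\n", l2p_end
      band_end = l2p_end + 0.50;    printf "band_md ends at %.3f MiB (printed mirror offset: 90.62)\n", band_end
  }'
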
[2024-07-26 23:27:34.371154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:42.631 [2024-07-26 23:27:34.371163] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:42.631 [2024-07-26 23:27:34.371171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.631 [2024-07-26 23:27:34.371190] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:42.631 [2024-07-26 23:27:34.371199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:42.631 [2024-07-26 23:27:34.371207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:42.631 [2024-07-26 23:27:34.371216] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:42.631 [2024-07-26 23:27:34.371226] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:42.631 [2024-07-26 23:27:34.371234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:42.631 [2024-07-26 23:27:34.371242] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:42.631 [2024-07-26 23:27:34.371251] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:42.631 [2024-07-26 23:27:34.371259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:42.631 [2024-07-26 23:27:34.371267] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:42.631 [2024-07-26 23:27:34.371276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:42.631 [2024-07-26 23:27:34.371284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:42.631 [2024-07-26 23:27:34.371292] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:42.631 [2024-07-26 23:27:34.371300] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:42.631 [2024-07-26 23:27:34.371308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:42.631 [2024-07-26 23:27:34.371316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:42.631 [2024-07-26 23:27:34.371324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:42.631 [2024-07-26 23:27:34.371335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:42.631 [2024-07-26 23:27:34.371343] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:42.631 [2024-07-26 23:27:34.371351] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:42.631 [2024-07-26 23:27:34.371359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:42.631 [2024-07-26 23:27:34.371367] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:42.631 [2024-07-26 23:27:34.371376] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:42.631 [2024-07-26 23:27:34.371385] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:42.631 [2024-07-26 23:27:34.371394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:42.631 [2024-07-26 23:27:34.371404] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:42.631 [2024-07-26 23:27:34.371412] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:42.631 [2024-07-26 23:27:34.371421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:42.631 [2024-07-26 23:27:34.371429] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region data_btm 00:19:42.631 [2024-07-26 23:27:34.371438] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:42.631 [2024-07-26 23:27:34.371447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:42.631 [2024-07-26 23:27:34.371456] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:42.631 [2024-07-26 23:27:34.371471] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:42.631 [2024-07-26 23:27:34.371482] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:42.631 [2024-07-26 23:27:34.371492] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:42.631 [2024-07-26 23:27:34.371501] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:42.631 [2024-07-26 23:27:34.371511] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:42.631 [2024-07-26 23:27:34.371520] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:42.631 [2024-07-26 23:27:34.371529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:42.631 [2024-07-26 23:27:34.371539] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:42.631 [2024-07-26 23:27:34.371548] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:42.631 [2024-07-26 23:27:34.371557] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:42.631 [2024-07-26 23:27:34.371567] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:42.631 [2024-07-26 23:27:34.371577] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:42.631 [2024-07-26 23:27:34.371586] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:42.631 [2024-07-26 23:27:34.371597] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:42.631 [2024-07-26 23:27:34.371606] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:42.631 [2024-07-26 23:27:34.371617] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:42.631 [2024-07-26 23:27:34.371627] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:42.631 [2024-07-26 23:27:34.371638] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:42.631 [2024-07-26 
23:27:34.371648] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:42.631 [2024-07-26 23:27:34.371657] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:42.631 [2024-07-26 23:27:34.371667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.631 [2024-07-26 23:27:34.371681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:42.631 [2024-07-26 23:27:34.371690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:19:42.631 [2024-07-26 23:27:34.371699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.891 [2024-07-26 23:27:34.396616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.891 [2024-07-26 23:27:34.396648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:42.891 [2024-07-26 23:27:34.396660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.914 ms 00:19:42.891 [2024-07-26 23:27:34.396670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.891 [2024-07-26 23:27:34.396774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.891 [2024-07-26 23:27:34.396786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:42.891 [2024-07-26 23:27:34.396797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:42.891 [2024-07-26 23:27:34.396807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.891 [2024-07-26 23:27:34.474931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.891 [2024-07-26 23:27:34.474971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:42.891 [2024-07-26 23:27:34.474984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.229 ms 00:19:42.891 [2024-07-26 23:27:34.474995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.891 [2024-07-26 23:27:34.475067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.891 [2024-07-26 23:27:34.475080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:42.891 [2024-07-26 23:27:34.475090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:42.891 [2024-07-26 23:27:34.475100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.891 [2024-07-26 23:27:34.475541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.891 [2024-07-26 23:27:34.475561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:42.891 [2024-07-26 23:27:34.475571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:19:42.891 [2024-07-26 23:27:34.475581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.891 [2024-07-26 23:27:34.475690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.891 [2024-07-26 23:27:34.475702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:42.891 [2024-07-26 23:27:34.475712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:42.891 [2024-07-26 23:27:34.475721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.891 [2024-07-26 23:27:34.498703] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.891 [2024-07-26 23:27:34.498736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:42.891 [2024-07-26 23:27:34.498748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.998 ms 00:19:42.891 [2024-07-26 23:27:34.498758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.891 [2024-07-26 23:27:34.517930] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:42.891 [2024-07-26 23:27:34.517991] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:42.891 [2024-07-26 23:27:34.518006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.891 [2024-07-26 23:27:34.518016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:42.891 [2024-07-26 23:27:34.518027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.172 ms 00:19:42.891 [2024-07-26 23:27:34.518037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.891 [2024-07-26 23:27:34.546986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.891 [2024-07-26 23:27:34.547020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:42.891 [2024-07-26 23:27:34.547033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.921 ms 00:19:42.891 [2024-07-26 23:27:34.547049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.891 [2024-07-26 23:27:34.565292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.891 [2024-07-26 23:27:34.565324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:42.891 [2024-07-26 23:27:34.565336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.200 ms 00:19:42.891 [2024-07-26 23:27:34.565346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.891 [2024-07-26 23:27:34.583809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.891 [2024-07-26 23:27:34.583851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:42.891 [2024-07-26 23:27:34.583862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.425 ms 00:19:42.891 [2024-07-26 23:27:34.583871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.891 [2024-07-26 23:27:34.584308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.891 [2024-07-26 23:27:34.584326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:42.891 [2024-07-26 23:27:34.584336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:19:42.891 [2024-07-26 23:27:34.584346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-26 23:27:34.669528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-26 23:27:34.669565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:43.151 [2024-07-26 23:27:34.669579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.297 ms 00:19:43.151 [2024-07-26 23:27:34.669589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-26 23:27:34.680848] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p 
maximum resident size is: 59 (of 60) MiB 00:19:43.151 [2024-07-26 23:27:34.696070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-26 23:27:34.696103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:43.151 [2024-07-26 23:27:34.696116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.443 ms 00:19:43.151 [2024-07-26 23:27:34.696125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-26 23:27:34.696207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-26 23:27:34.696219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:43.151 [2024-07-26 23:27:34.696229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:43.151 [2024-07-26 23:27:34.696239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-26 23:27:34.696293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-26 23:27:34.696308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:43.151 [2024-07-26 23:27:34.696318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:43.151 [2024-07-26 23:27:34.696327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-26 23:27:34.698214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-26 23:27:34.698242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:43.151 [2024-07-26 23:27:34.698252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.872 ms 00:19:43.151 [2024-07-26 23:27:34.698262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-26 23:27:34.698292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-26 23:27:34.698302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:43.151 [2024-07-26 23:27:34.698313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:43.151 [2024-07-26 23:27:34.698326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-26 23:27:34.698361] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:43.151 [2024-07-26 23:27:34.698372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-26 23:27:34.698382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:43.151 [2024-07-26 23:27:34.698392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:43.151 [2024-07-26 23:27:34.698401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-26 23:27:34.733917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-26 23:27:34.733951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:43.151 [2024-07-26 23:27:34.733989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.550 ms 00:19:43.151 [2024-07-26 23:27:34.733999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-26 23:27:34.734101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.151 [2024-07-26 23:27:34.734114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
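(Editor's aside: the repeating Action / name / duration / status quadruples above are emitted once per FTL management step during startup. As a rough illustration of how such per-step durations can be measured, the standalone C sketch below times a step with CLOCK_MONOTONIC and prints the same four fields; it is a hypothetical stand-in for the pattern, not SPDK's actual trace_step() implementation.)

#include <stdio.h>
#include <time.h>

/* Illustrative only: time one management step and report it in the
 * Action/name/duration/status format seen in the log above. */
static double elapsed_ms(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) * 1e3 + (b->tv_nsec - a->tv_nsec) / 1e6;
}

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... the step body (e.g. metadata initialization) would run here ... */
    clock_gettime(CLOCK_MONOTONIC, &end);
    printf("Action\n");
    printf("\tname:     Finalize initialization\n");
    printf("\tduration: %.3f ms\n", elapsed_ms(&start, &end));
    printf("\tstatus:   %d\n", 0);
    return 0;
}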
00:19:43.151 [2024-07-26 23:27:34.734129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:43.151 [2024-07-26 23:27:34.734139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.151 [2024-07-26 23:27:34.735033] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:43.151 [2024-07-26 23:27:34.739454] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 402.483 ms, result 0 00:19:43.151 [2024-07-26 23:27:34.740325] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:43.151 [2024-07-26 23:27:34.757289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:53.979  Copying: 26/256 [MB] (26 MBps) Copying: 50/256 [MB] (23 MBps) Copying: 73/256 [MB] (23 MBps) Copying: 96/256 [MB] (23 MBps) Copying: 120/256 [MB] (23 MBps) Copying: 143/256 [MB] (23 MBps) Copying: 166/256 [MB] (23 MBps) Copying: 189/256 [MB] (22 MBps) Copying: 212/256 [MB] (22 MBps) Copying: 235/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-07-26 23:27:45.605881] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:53.980 [2024-07-26 23:27:45.619245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.980 [2024-07-26 23:27:45.619281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:53.980 [2024-07-26 23:27:45.619295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:53.980 [2024-07-26 23:27:45.619312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.980 [2024-07-26 23:27:45.619333] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:53.980 [2024-07-26 23:27:45.622673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.980 [2024-07-26 23:27:45.622701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:53.980 [2024-07-26 23:27:45.622711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.331 ms 00:19:53.980 [2024-07-26 23:27:45.622721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.980 [2024-07-26 23:27:45.622932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.980 [2024-07-26 23:27:45.622944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:53.980 [2024-07-26 23:27:45.622953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:19:53.980 [2024-07-26 23:27:45.622971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.980 [2024-07-26 23:27:45.625615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.980 [2024-07-26 23:27:45.625641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:53.980 [2024-07-26 23:27:45.625651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.632 ms 00:19:53.980 [2024-07-26 23:27:45.625660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.980 [2024-07-26 23:27:45.630864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.980 [2024-07-26 23:27:45.630894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:53.980 [2024-07-26 
23:27:45.630905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.184 ms 00:19:53.980 [2024-07-26 23:27:45.630914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.980 [2024-07-26 23:27:45.665250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.980 [2024-07-26 23:27:45.665283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:53.980 [2024-07-26 23:27:45.665295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.314 ms 00:19:53.980 [2024-07-26 23:27:45.665304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.980 [2024-07-26 23:27:45.686637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.980 [2024-07-26 23:27:45.686669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:53.980 [2024-07-26 23:27:45.686686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.306 ms 00:19:53.980 [2024-07-26 23:27:45.686696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.980 [2024-07-26 23:27:45.686827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.980 [2024-07-26 23:27:45.686840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:53.980 [2024-07-26 23:27:45.686850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:53.980 [2024-07-26 23:27:45.686870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.980 [2024-07-26 23:27:45.723181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.980 [2024-07-26 23:27:45.723212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:53.980 [2024-07-26 23:27:45.723234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.352 ms 00:19:53.980 [2024-07-26 23:27:45.723243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.240 [2024-07-26 23:27:45.758873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.240 [2024-07-26 23:27:45.758903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:54.240 [2024-07-26 23:27:45.758915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.626 ms 00:19:54.240 [2024-07-26 23:27:45.758924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.240 [2024-07-26 23:27:45.794448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.240 [2024-07-26 23:27:45.794478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:54.240 [2024-07-26 23:27:45.794489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.511 ms 00:19:54.240 [2024-07-26 23:27:45.794499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.240 [2024-07-26 23:27:45.829563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.240 [2024-07-26 23:27:45.829593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:54.240 [2024-07-26 23:27:45.829604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.038 ms 00:19:54.240 [2024-07-26 23:27:45.829613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.240 [2024-07-26 23:27:45.829672] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:54.240 [2024-07-26 23:27:45.829689] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:54.240 [2024-07-26 23:27:45.829701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:54.240 [2024-07-26 23:27:45.829711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:54.240 [2024-07-26 23:27:45.829723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829937] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.829992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 
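(Editor's aside: each line of this dump reports a band as "valid blocks / band size, write count, state". Assuming the usual 4 KiB FTL block size -- an assumption, since the block size is not printed here -- the numbers above are mutually consistent, as the small check below shows: a 261120-block band is 1020 MiB, so the 100 bands roughly cover the 102400.00 MiB data_btm region, and the 0x5a00-block l2p region from the superblock dump works out to the 90.00 MiB shown in the NV cache layout.)

#include <stdio.h>

int main(void)
{
    const double block = 4096.0;           /* assumption: 4 KiB FTL blocks */
    const double mib = 1024.0 * 1024.0;
    printf("band:      %.2f MiB\n", 261120 * block / mib);       /* 1020.00 */
    printf("100 bands: %.2f MiB\n", 100 * 261120 * block / mib); /* 102000.00 */
    printf("l2p:       %.2f MiB\n", 0x5a00 * block / mib);       /* 90.00 */
    return 0;
}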
[2024-07-26 23:27:45.830194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 
state: free 00:19:54.241 [2024-07-26 23:27:45.830435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:54.241 [2024-07-26 23:27:45.830597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:54.242 [2024-07-26 23:27:45.830607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:54.242 [2024-07-26 23:27:45.830617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:54.242 [2024-07-26 23:27:45.830627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:54.242 [2024-07-26 23:27:45.830637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:54.242 [2024-07-26 23:27:45.830647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:54.242 [2024-07-26 23:27:45.830656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:54.242 [2024-07-26 23:27:45.830666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:19:54.242 [2024-07-26 23:27:45.830682] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:54.242 [2024-07-26 23:27:45.830702] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6c3633be-1fe3-48ba-8b1c-4ab3ac85cc5d 00:19:54.242 [2024-07-26 23:27:45.830712] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:54.242 [2024-07-26 23:27:45.830721] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:54.242 [2024-07-26 23:27:45.830730] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:54.242 [2024-07-26 23:27:45.830739] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:54.242 [2024-07-26 23:27:45.830748] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:54.242 [2024-07-26 23:27:45.830758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:54.242 [2024-07-26 23:27:45.830768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:54.242 [2024-07-26 23:27:45.830777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:54.242 [2024-07-26 23:27:45.830785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:54.242 [2024-07-26 23:27:45.830793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.242 [2024-07-26 23:27:45.830806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:54.242 [2024-07-26 23:27:45.830816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.124 ms 00:19:54.242 [2024-07-26 23:27:45.830824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.242 [2024-07-26 23:27:45.849209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.242 [2024-07-26 23:27:45.849238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:54.242 [2024-07-26 23:27:45.849249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.397 ms 00:19:54.242 [2024-07-26 23:27:45.849259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.242 [2024-07-26 23:27:45.849501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.242 [2024-07-26 23:27:45.849514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:54.242 [2024-07-26 23:27:45.849525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:19:54.242 [2024-07-26 23:27:45.849534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.242 [2024-07-26 23:27:45.903124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.242 [2024-07-26 23:27:45.903154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.242 [2024-07-26 23:27:45.903165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.242 [2024-07-26 23:27:45.903175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.242 [2024-07-26 23:27:45.903252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.242 [2024-07-26 23:27:45.903262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.242 [2024-07-26 23:27:45.903273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.242 [2024-07-26 23:27:45.903283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:54.242 [2024-07-26 23:27:45.903327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.242 [2024-07-26 23:27:45.903338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.242 [2024-07-26 23:27:45.903348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.242 [2024-07-26 23:27:45.903358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.242 [2024-07-26 23:27:45.903375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.242 [2024-07-26 23:27:45.903389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.242 [2024-07-26 23:27:45.903399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.242 [2024-07-26 23:27:45.903407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.502 [2024-07-26 23:27:46.007110] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.502 [2024-07-26 23:27:46.007152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.502 [2024-07-26 23:27:46.007166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.502 [2024-07-26 23:27:46.007176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.502 [2024-07-26 23:27:46.050225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.502 [2024-07-26 23:27:46.050257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.502 [2024-07-26 23:27:46.050269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.502 [2024-07-26 23:27:46.050278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.502 [2024-07-26 23:27:46.050333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.502 [2024-07-26 23:27:46.050344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.502 [2024-07-26 23:27:46.050354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.502 [2024-07-26 23:27:46.050364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.502 [2024-07-26 23:27:46.050391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.502 [2024-07-26 23:27:46.050401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.502 [2024-07-26 23:27:46.050417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.502 [2024-07-26 23:27:46.050427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.502 [2024-07-26 23:27:46.050532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.502 [2024-07-26 23:27:46.050544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.502 [2024-07-26 23:27:46.050554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.502 [2024-07-26 23:27:46.050563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.502 [2024-07-26 23:27:46.050599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.502 [2024-07-26 23:27:46.050610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:54.502 [2024-07-26 23:27:46.050620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.502 [2024-07-26 
23:27:46.050633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.502 [2024-07-26 23:27:46.050670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.502 [2024-07-26 23:27:46.050681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.502 [2024-07-26 23:27:46.050690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.502 [2024-07-26 23:27:46.050701] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.502 [2024-07-26 23:27:46.050742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.502 [2024-07-26 23:27:46.050752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.502 [2024-07-26 23:27:46.050765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.502 [2024-07-26 23:27:46.050777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.502 [2024-07-26 23:27:46.050905] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 432.363 ms, result 0 00:19:55.881 00:19:55.881 00:19:55.881 23:27:47 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:55.881 23:27:47 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:56.141 23:27:47 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:56.141 [2024-07-26 23:27:47.779532] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:56.141 [2024-07-26 23:27:47.779647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74029 ] 00:19:56.401 [2024-07-26 23:27:47.951274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.660 [2024-07-26 23:27:48.159352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.920 [2024-07-26 23:27:48.560160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:56.920 [2024-07-26 23:27:48.560220] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:57.181 [2024-07-26 23:27:48.714279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.181 [2024-07-26 23:27:48.714324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:57.181 [2024-07-26 23:27:48.714339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:57.181 [2024-07-26 23:27:48.714352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.181 [2024-07-26 23:27:48.717298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.181 [2024-07-26 23:27:48.717336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.181 [2024-07-26 23:27:48.717348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.931 ms 00:19:57.181 [2024-07-26 23:27:48.717360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.181 [2024-07-26 23:27:48.717447] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 
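(Editor's aside: the spdk_dd invocation above writes --count=1024 I/O units of the random pattern into ftl0; if each unit is one 4 KiB FTL block -- an assumption, since the I/O size is not shown in this log -- that is exactly the 4194304 bytes that the cmp --bytes=4194304 step verifies. The "WAF: inf" line in the statistics dump further up is the same kind of derived number: total media writes divided by user writes, reported as infinity while user writes are still 0. A small illustrative check of both, not SPDK's ftl_debug.c code:)

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* assumption: one spdk_dd I/O unit == one 4 KiB FTL block */
    unsigned long long bytes = 1024ULL * 4096ULL;
    printf("spdk_dd payload: %llu bytes\n", bytes);            /* 4194304 */

    /* WAF from the "Dump statistics" section: total writes / user writes */
    double total = 960.0, user = 0.0;
    printf("WAF: %g\n", user > 0.0 ? total / user : INFINITY); /* inf */
    return 0;
}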
00:19:57.181 [2024-07-26 23:27:48.718652] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:57.181 [2024-07-26 23:27:48.718806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.181 [2024-07-26 23:27:48.718891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.181 [2024-07-26 23:27:48.718927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:19:57.181 [2024-07-26 23:27:48.718956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.181 [2024-07-26 23:27:48.720467] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:57.181 [2024-07-26 23:27:48.739721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.181 [2024-07-26 23:27:48.739860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:57.181 [2024-07-26 23:27:48.740114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.286 ms 00:19:57.181 [2024-07-26 23:27:48.740133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.181 [2024-07-26 23:27:48.740227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.181 [2024-07-26 23:27:48.740242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:57.181 [2024-07-26 23:27:48.740256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:57.181 [2024-07-26 23:27:48.740267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.181 [2024-07-26 23:27:48.746995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.181 [2024-07-26 23:27:48.747019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.181 [2024-07-26 23:27:48.747031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.699 ms 00:19:57.181 [2024-07-26 23:27:48.747040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.181 [2024-07-26 23:27:48.747138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.181 [2024-07-26 23:27:48.747154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.181 [2024-07-26 23:27:48.747165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:57.181 [2024-07-26 23:27:48.747174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.181 [2024-07-26 23:27:48.747201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.181 [2024-07-26 23:27:48.747212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:57.181 [2024-07-26 23:27:48.747221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:57.181 [2024-07-26 23:27:48.747231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.181 [2024-07-26 23:27:48.747256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:57.181 [2024-07-26 23:27:48.752645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.181 [2024-07-26 23:27:48.752678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.181 [2024-07-26 23:27:48.752690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.407 ms 00:19:57.181 [2024-07-26 23:27:48.752699] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:57.181 [2024-07-26 23:27:48.752762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.181 [2024-07-26 23:27:48.752777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:57.181 [2024-07-26 23:27:48.752787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:57.181 [2024-07-26 23:27:48.752797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.181 [2024-07-26 23:27:48.752816] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:57.181 [2024-07-26 23:27:48.752837] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:57.181 [2024-07-26 23:27:48.752867] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:57.181 [2024-07-26 23:27:48.752884] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:57.181 [2024-07-26 23:27:48.752946] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:57.181 [2024-07-26 23:27:48.752958] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:57.181 [2024-07-26 23:27:48.752990] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:57.181 [2024-07-26 23:27:48.753003] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:57.181 [2024-07-26 23:27:48.753014] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:57.181 [2024-07-26 23:27:48.753025] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:57.181 [2024-07-26 23:27:48.753034] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:57.181 [2024-07-26 23:27:48.753043] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:57.181 [2024-07-26 23:27:48.753052] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:57.181 [2024-07-26 23:27:48.753062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.181 [2024-07-26 23:27:48.753074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:57.181 [2024-07-26 23:27:48.753084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:19:57.181 [2024-07-26 23:27:48.753114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.181 [2024-07-26 23:27:48.753171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.181 [2024-07-26 23:27:48.753182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:57.181 [2024-07-26 23:27:48.753191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:57.181 [2024-07-26 23:27:48.753200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.181 [2024-07-26 23:27:48.753260] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:57.181 [2024-07-26 23:27:48.753271] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:57.181 [2024-07-26 23:27:48.753281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:57.181 [2024-07-26 
23:27:48.753293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.181 [2024-07-26 23:27:48.753302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:57.181 [2024-07-26 23:27:48.753311] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:57.181 [2024-07-26 23:27:48.753320] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:57.181 [2024-07-26 23:27:48.753329] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:57.181 [2024-07-26 23:27:48.753339] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:57.181 [2024-07-26 23:27:48.753348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:57.181 [2024-07-26 23:27:48.753357] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:57.181 [2024-07-26 23:27:48.753366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:57.181 [2024-07-26 23:27:48.753374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:57.181 [2024-07-26 23:27:48.753383] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:57.181 [2024-07-26 23:27:48.753392] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:57.181 [2024-07-26 23:27:48.753401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.181 [2024-07-26 23:27:48.753410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:57.181 [2024-07-26 23:27:48.753418] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:57.181 [2024-07-26 23:27:48.753427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.181 [2024-07-26 23:27:48.753444] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:57.181 [2024-07-26 23:27:48.753452] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:57.182 [2024-07-26 23:27:48.753461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:57.182 [2024-07-26 23:27:48.753470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:57.182 [2024-07-26 23:27:48.753478] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:57.182 [2024-07-26 23:27:48.753487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:57.182 [2024-07-26 23:27:48.753495] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:57.182 [2024-07-26 23:27:48.753503] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:57.182 [2024-07-26 23:27:48.753512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:57.182 [2024-07-26 23:27:48.753520] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:57.182 [2024-07-26 23:27:48.753529] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:57.182 [2024-07-26 23:27:48.753537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:57.182 [2024-07-26 23:27:48.753545] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:57.182 [2024-07-26 23:27:48.753553] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:57.182 [2024-07-26 23:27:48.753562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:57.182 [2024-07-26 23:27:48.753570] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 
00:19:57.182 [2024-07-26 23:27:48.753578] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:57.182 [2024-07-26 23:27:48.753586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:57.182 [2024-07-26 23:27:48.753594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:57.182 [2024-07-26 23:27:48.753602] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:57.182 [2024-07-26 23:27:48.753611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:57.182 [2024-07-26 23:27:48.753619] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:57.182 [2024-07-26 23:27:48.753629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:57.182 [2024-07-26 23:27:48.753638] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:57.182 [2024-07-26 23:27:48.753646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.182 [2024-07-26 23:27:48.753656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:57.182 [2024-07-26 23:27:48.753665] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:57.182 [2024-07-26 23:27:48.753673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:57.182 [2024-07-26 23:27:48.753682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:57.182 [2024-07-26 23:27:48.753690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:57.182 [2024-07-26 23:27:48.753699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:57.182 [2024-07-26 23:27:48.753708] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:57.182 [2024-07-26 23:27:48.753722] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.182 [2024-07-26 23:27:48.753733] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:57.182 [2024-07-26 23:27:48.753742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:57.182 [2024-07-26 23:27:48.753751] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:57.182 [2024-07-26 23:27:48.753760] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:57.182 [2024-07-26 23:27:48.753769] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:57.182 [2024-07-26 23:27:48.753779] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:57.182 [2024-07-26 23:27:48.753789] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:57.182 [2024-07-26 23:27:48.753799] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:57.182 [2024-07-26 23:27:48.753809] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 
blk_offs:0x6b60 blk_sz:0x40 00:19:57.182 [2024-07-26 23:27:48.753819] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:57.182 [2024-07-26 23:27:48.753828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:57.182 [2024-07-26 23:27:48.753837] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:57.182 [2024-07-26 23:27:48.753847] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:57.182 [2024-07-26 23:27:48.753858] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:57.182 [2024-07-26 23:27:48.753869] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.182 [2024-07-26 23:27:48.753879] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:57.182 [2024-07-26 23:27:48.753888] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:57.182 [2024-07-26 23:27:48.753898] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:57.182 [2024-07-26 23:27:48.753908] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:57.182 [2024-07-26 23:27:48.753919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.182 [2024-07-26 23:27:48.753935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:57.182 [2024-07-26 23:27:48.753945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:19:57.182 [2024-07-26 23:27:48.753954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.182 [2024-07-26 23:27:48.777100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.182 [2024-07-26 23:27:48.777134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.182 [2024-07-26 23:27:48.777147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.129 ms 00:19:57.182 [2024-07-26 23:27:48.777157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.182 [2024-07-26 23:27:48.777259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.182 [2024-07-26 23:27:48.777271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:57.182 [2024-07-26 23:27:48.777282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:57.182 [2024-07-26 23:27:48.777291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.182 [2024-07-26 23:27:48.856629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.182 [2024-07-26 23:27:48.856664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.182 [2024-07-26 23:27:48.856679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.445 ms 00:19:57.182 [2024-07-26 23:27:48.856689] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:57.182 [2024-07-26 23:27:48.856764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.182 [2024-07-26 23:27:48.856776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.182 [2024-07-26 23:27:48.856786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:57.182 [2024-07-26 23:27:48.856796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.182 [2024-07-26 23:27:48.857267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.182 [2024-07-26 23:27:48.857281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.182 [2024-07-26 23:27:48.857291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:19:57.182 [2024-07-26 23:27:48.857301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.182 [2024-07-26 23:27:48.857406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.182 [2024-07-26 23:27:48.857419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.182 [2024-07-26 23:27:48.857430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:57.182 [2024-07-26 23:27:48.857439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.182 [2024-07-26 23:27:48.877683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.182 [2024-07-26 23:27:48.877717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.182 [2024-07-26 23:27:48.877730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.255 ms 00:19:57.182 [2024-07-26 23:27:48.877740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.182 [2024-07-26 23:27:48.895809] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:57.182 [2024-07-26 23:27:48.895848] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:57.182 [2024-07-26 23:27:48.895863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.182 [2024-07-26 23:27:48.895873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:57.182 [2024-07-26 23:27:48.895884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.042 ms 00:19:57.182 [2024-07-26 23:27:48.895893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.182 [2024-07-26 23:27:48.924642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.182 [2024-07-26 23:27:48.924677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:57.182 [2024-07-26 23:27:48.924690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.693 ms 00:19:57.182 [2024-07-26 23:27:48.924705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.443 [2024-07-26 23:27:48.942917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.443 [2024-07-26 23:27:48.942954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:57.443 [2024-07-26 23:27:48.942975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.169 ms 00:19:57.443 [2024-07-26 23:27:48.942985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.443 [2024-07-26 
23:27:48.960305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.443 [2024-07-26 23:27:48.960354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:57.443 [2024-07-26 23:27:48.960366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.277 ms 00:19:57.443 [2024-07-26 23:27:48.960390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.443 [2024-07-26 23:27:48.960812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.443 [2024-07-26 23:27:48.960826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:57.443 [2024-07-26 23:27:48.960837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:19:57.443 [2024-07-26 23:27:48.960846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.443 [2024-07-26 23:27:49.046076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.443 [2024-07-26 23:27:49.046117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:57.443 [2024-07-26 23:27:49.046131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.344 ms 00:19:57.443 [2024-07-26 23:27:49.046141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.443 [2024-07-26 23:27:49.057187] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:57.443 [2024-07-26 23:27:49.072595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.444 [2024-07-26 23:27:49.072636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:57.444 [2024-07-26 23:27:49.072649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.416 ms 00:19:57.444 [2024-07-26 23:27:49.072659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-07-26 23:27:49.072749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.444 [2024-07-26 23:27:49.072762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:57.444 [2024-07-26 23:27:49.072773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:57.444 [2024-07-26 23:27:49.072782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-07-26 23:27:49.072837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.444 [2024-07-26 23:27:49.072852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:57.444 [2024-07-26 23:27:49.072862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:57.444 [2024-07-26 23:27:49.072870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-07-26 23:27:49.074832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.444 [2024-07-26 23:27:49.074863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:57.444 [2024-07-26 23:27:49.074874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.946 ms 00:19:57.444 [2024-07-26 23:27:49.074884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-07-26 23:27:49.074914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.444 [2024-07-26 23:27:49.074925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:57.444 [2024-07-26 23:27:49.074936] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:57.444 [2024-07-26 23:27:49.074949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-07-26 23:27:49.075010] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:57.444 [2024-07-26 23:27:49.075021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.444 [2024-07-26 23:27:49.075031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:57.444 [2024-07-26 23:27:49.075048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:57.444 [2024-07-26 23:27:49.075057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-07-26 23:27:49.111168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.444 [2024-07-26 23:27:49.111203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:57.444 [2024-07-26 23:27:49.111223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.148 ms 00:19:57.444 [2024-07-26 23:27:49.111233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-07-26 23:27:49.111334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.444 [2024-07-26 23:27:49.111346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:57.444 [2024-07-26 23:27:49.111357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:57.444 [2024-07-26 23:27:49.111366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.444 [2024-07-26 23:27:49.112253] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:57.444 [2024-07-26 23:27:49.117099] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 398.308 ms, result 0 00:19:57.444 [2024-07-26 23:27:49.118012] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.444 [2024-07-26 23:27:49.134928] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:57.703  Copying: 4096/4096 [kB] (average 20 MBps)[2024-07-26 23:27:49.336239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.704 [2024-07-26 23:27:49.349090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.704 [2024-07-26 23:27:49.349123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:57.704 [2024-07-26 23:27:49.349135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:57.704 [2024-07-26 23:27:49.349150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.704 [2024-07-26 23:27:49.349170] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:57.704 [2024-07-26 23:27:49.352492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.704 [2024-07-26 23:27:49.352516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:57.704 [2024-07-26 23:27:49.352527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.314 ms 00:19:57.704 [2024-07-26 23:27:49.352536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
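
The dump_region lines above print each FTL region twice over: once in MiB, and once in the superblock dump as raw blk_offs/blk_sz block counts in hex. The two views only reconcile with a 4 KiB FTL block: 0x5a00 blocks lines up with the 90.00 MiB l2p region, 0x20 blocks with the 0.12 MiB superblock. A minimal standalone C sketch of that conversion; the FTL_BLOCK_SIZE constant, the region table, and the name-to-type pairing here are inferences from the matching figures in this log, not taken from SPDK headers:

#include <stdint.h>
#include <stdio.h>

#define FTL_BLOCK_SIZE 4096ULL  /* assumed; consistent with every figure in the dump */

static double blocks_to_mib(uint64_t blocks)
{
        return (double)(blocks * FTL_BLOCK_SIZE) / (1024.0 * 1024.0);
}

int main(void)
{
        /* Rows copied from the superblock dumps above; the name pairing is
         * inferred from matching sizes in the MiB dump, not from SPDK source. */
        struct { const char *name; uint64_t blk_offs; uint64_t blk_sz; } rows[] = {
                { "sb (type 0x0)",       0x0,  0x20      },  /* dump says 0.12 MiB      */
                { "l2p (type 0x2)",      0x20, 0x5a00    },  /* dump says 90.00 MiB     */
                { "data_btm (type 0x9)", 0x40, 0x1900000 },  /* dump says 102400.00 MiB */
        };

        for (size_t i = 0; i < sizeof(rows) / sizeof(rows[0]); i++)
                printf("%-20s offset %.2f MiB, size %.2f MiB\n", rows[i].name,
                       blocks_to_mib(rows[i].blk_offs), blocks_to_mib(rows[i].blk_sz));
        return 0;
}

The same arithmetic accounts for the band table further down: each of the 100 bands spans 261120 blocks, i.e. exactly 1020 MiB of the 103424 MiB base device reported by ftl_layout_setup.
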
00:19:57.704 [2024-07-26 23:27:49.354611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.704 [2024-07-26 23:27:49.354648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:57.704 [2024-07-26 23:27:49.354660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.056 ms 00:19:57.704 [2024-07-26 23:27:49.354671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.704 [2024-07-26 23:27:49.357856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.704 [2024-07-26 23:27:49.357890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:57.704 [2024-07-26 23:27:49.357901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.173 ms 00:19:57.704 [2024-07-26 23:27:49.357910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.704 [2024-07-26 23:27:49.363214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.704 [2024-07-26 23:27:49.363244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:57.704 [2024-07-26 23:27:49.363254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.284 ms 00:19:57.704 [2024-07-26 23:27:49.363263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.704 [2024-07-26 23:27:49.397203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.704 [2024-07-26 23:27:49.397235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:57.704 [2024-07-26 23:27:49.397247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.931 ms 00:19:57.704 [2024-07-26 23:27:49.397255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.704 [2024-07-26 23:27:49.417868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.704 [2024-07-26 23:27:49.417904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:57.704 [2024-07-26 23:27:49.417921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.584 ms 00:19:57.704 [2024-07-26 23:27:49.417929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.704 [2024-07-26 23:27:49.418072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.704 [2024-07-26 23:27:49.418085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:57.704 [2024-07-26 23:27:49.418095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:57.704 [2024-07-26 23:27:49.418104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.704 [2024-07-26 23:27:49.453035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.704 [2024-07-26 23:27:49.453067] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:57.704 [2024-07-26 23:27:49.453089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.972 ms 00:19:57.704 [2024-07-26 23:27:49.453098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.964 [2024-07-26 23:27:49.488621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.964 [2024-07-26 23:27:49.488656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:57.964 [2024-07-26 23:27:49.488667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.520 ms 00:19:57.964 [2024-07-26 23:27:49.488676] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.964 [2024-07-26 23:27:49.523057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.964 [2024-07-26 23:27:49.523089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:57.964 [2024-07-26 23:27:49.523102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.377 ms 00:19:57.964 [2024-07-26 23:27:49.523110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.964 [2024-07-26 23:27:49.558668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.965 [2024-07-26 23:27:49.558700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:57.965 [2024-07-26 23:27:49.558711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.533 ms 00:19:57.965 [2024-07-26 23:27:49.558720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.965 [2024-07-26 23:27:49.558780] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:57.965 [2024-07-26 23:27:49.558796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558960] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.558990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 
[2024-07-26 23:27:49.559229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:19:57.965 [2024-07-26 23:27:49.559470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:57.965 [2024-07-26 23:27:49.559644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:57.966 [2024-07-26 23:27:49.559795] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:57.966 [2024-07-26 23:27:49.559814] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6c3633be-1fe3-48ba-8b1c-4ab3ac85cc5d 00:19:57.966 [2024-07-26 23:27:49.559825] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:57.966 [2024-07-26 23:27:49.559833] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:57.966 [2024-07-26 23:27:49.559842] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:57.966 [2024-07-26 23:27:49.559851] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:57.966 [2024-07-26 23:27:49.559860] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:57.966 [2024-07-26 23:27:49.559869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:57.966 [2024-07-26 23:27:49.559878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:57.966 [2024-07-26 23:27:49.559886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:57.966 [2024-07-26 23:27:49.559894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:57.966 [2024-07-26 23:27:49.559903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.966 [2024-07-26 23:27:49.559915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:57.966 [2024-07-26 23:27:49.559925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:19:57.966 [2024-07-26 23:27:49.559934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.966 [2024-07-26 23:27:49.578257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.966 [2024-07-26 23:27:49.578287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:57.966 [2024-07-26 23:27:49.578299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.323 ms 00:19:57.966 [2024-07-26 23:27:49.578307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.966 [2024-07-26 23:27:49.578569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.966 [2024-07-26 23:27:49.578580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:19:57.966 [2024-07-26 23:27:49.578590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:19:57.966 [2024-07-26 23:27:49.578599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.966 [2024-07-26 23:27:49.633118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.966 [2024-07-26 23:27:49.633153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.966 [2024-07-26 23:27:49.633165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.966 [2024-07-26 23:27:49.633175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.966 [2024-07-26 23:27:49.633253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.966 [2024-07-26 23:27:49.633264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.966 [2024-07-26 23:27:49.633274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.966 [2024-07-26 23:27:49.633282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.966 [2024-07-26 23:27:49.633323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.966 [2024-07-26 23:27:49.633333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.966 [2024-07-26 23:27:49.633343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.966 [2024-07-26 23:27:49.633352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.966 [2024-07-26 23:27:49.633372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.966 [2024-07-26 23:27:49.633383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.966 [2024-07-26 23:27:49.633392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.966 [2024-07-26 23:27:49.633401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.226 [2024-07-26 23:27:49.742097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.226 [2024-07-26 23:27:49.742138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.226 [2024-07-26 23:27:49.742151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.226 [2024-07-26 23:27:49.742161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.226 [2024-07-26 23:27:49.784159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.226 [2024-07-26 23:27:49.784195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.226 [2024-07-26 23:27:49.784207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.226 [2024-07-26 23:27:49.784216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.226 [2024-07-26 23:27:49.784275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.226 [2024-07-26 23:27:49.784286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.226 [2024-07-26 23:27:49.784296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.226 [2024-07-26 23:27:49.784305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.226 [2024-07-26 23:27:49.784334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.226 [2024-07-26 
23:27:49.784348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.226 [2024-07-26 23:27:49.784358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.226 [2024-07-26 23:27:49.784367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.226 [2024-07-26 23:27:49.784469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.226 [2024-07-26 23:27:49.784481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.226 [2024-07-26 23:27:49.784491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.226 [2024-07-26 23:27:49.784500] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.226 [2024-07-26 23:27:49.784534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.226 [2024-07-26 23:27:49.784545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:58.226 [2024-07-26 23:27:49.784558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.226 [2024-07-26 23:27:49.784567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.226 [2024-07-26 23:27:49.784603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.226 [2024-07-26 23:27:49.784614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.226 [2024-07-26 23:27:49.784624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.226 [2024-07-26 23:27:49.784633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.226 [2024-07-26 23:27:49.784677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.226 [2024-07-26 23:27:49.784688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.226 [2024-07-26 23:27:49.784701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.226 [2024-07-26 23:27:49.784713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.226 [2024-07-26 23:27:49.784844] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 436.459 ms, result 0 00:19:59.164 00:19:59.164 00:19:59.164 23:27:50 -- ftl/trim.sh@93 -- # svcpid=74064 00:19:59.164 23:27:50 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:59.164 23:27:50 -- ftl/trim.sh@94 -- # waitforlisten 74064 00:19:59.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.164 23:27:50 -- common/autotest_common.sh@819 -- # '[' -z 74064 ']' 00:19:59.164 23:27:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.164 23:27:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:59.164 23:27:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.164 23:27:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:59.164 23:27:50 -- common/autotest_common.sh@10 -- # set +x 00:19:59.423 [2024-07-26 23:27:50.992718] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
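
One derived figure in the ftl_dev_dump_stats block above: the run reports total writes: 960 against user writes: 0, and WAF: inf. If WAF here is the usual write-amplification ratio of device writes to host writes, the inf simply falls out of floating-point division by zero; a toy check of that arithmetic (the function and its parameter names are mine):

#include <stdio.h>

/* Write amplification factor as the usual ratio: total device writes over
 * user (host) writes. With a 0.0 divisor, IEEE double division yields +inf
 * rather than trapping, which matches what the dump prints. */
static double waf(double total_writes, double user_writes)
{
        return total_writes / user_writes;
}

int main(void)
{
        printf("WAF: %g\n", waf(960.0, 0.0));  /* prints: WAF: inf */
        return 0;
}
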
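
After the first fixture shuts down, trim.sh starts a fresh spdk_tgt and the harness calls waitforlisten (visible in the xtrace above) while printing "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". That wait amounts to polling the target's RPC socket until it accepts a connection. A rough standalone C equivalent of such a poll loop, offered as a sketch of the idea rather than the helper's actual shell implementation:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Retry connecting to an AF_UNIX socket until the server accepts or the
 * retry budget runs out. Returns 0 once the target is up and listening. */
static int wait_for_rpc_sock(const char *path, int retries)
{
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        while (retries-- > 0) {
                int fd = socket(AF_UNIX, SOCK_STREAM, 0);
                if (fd < 0)
                        return -1;
                if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                        close(fd);
                        return 0;  /* target is up and listening */
                }
                close(fd);
                usleep(100 * 1000);  /* 100 ms between attempts */
        }
        return -1;
}

int main(void)
{
        if (wait_for_rpc_sock("/var/tmp/spdk.sock", 100) != 0) {
                fprintf(stderr, "spdk_tgt did not come up\n");
                return 1;
        }
        puts("listening");
        return 0;
}
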
00:19:59.423 [2024-07-26 23:27:50.993417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74064 ] 00:19:59.423 [2024-07-26 23:27:51.159797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.682 [2024-07-26 23:27:51.366474] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:59.683 [2024-07-26 23:27:51.366889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.112 23:27:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:01.112 23:27:52 -- common/autotest_common.sh@852 -- # return 0 00:20:01.112 23:27:52 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:01.112 [2024-07-26 23:27:52.624945] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:01.112 [2024-07-26 23:27:52.625015] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:01.112 [2024-07-26 23:27:52.795175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.112 [2024-07-26 23:27:52.795222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:01.112 [2024-07-26 23:27:52.795239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:01.112 [2024-07-26 23:27:52.795249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.112 [2024-07-26 23:27:52.798209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.112 [2024-07-26 23:27:52.798247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:01.112 [2024-07-26 23:27:52.798262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.946 ms 00:20:01.112 [2024-07-26 23:27:52.798272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.112 [2024-07-26 23:27:52.798365] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:01.112 [2024-07-26 23:27:52.799560] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:01.112 [2024-07-26 23:27:52.799595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.112 [2024-07-26 23:27:52.799606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:01.112 [2024-07-26 23:27:52.799619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.242 ms 00:20:01.112 [2024-07-26 23:27:52.799629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.112 [2024-07-26 23:27:52.801123] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:01.112 [2024-07-26 23:27:52.819860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.112 [2024-07-26 23:27:52.819905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:01.112 [2024-07-26 23:27:52.819918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.774 ms 00:20:01.112 [2024-07-26 23:27:52.819930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.112 [2024-07-26 23:27:52.820059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.112 [2024-07-26 23:27:52.820078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:20:01.112 [2024-07-26 23:27:52.820090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:01.112 [2024-07-26 23:27:52.820103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.112 [2024-07-26 23:27:52.826757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.112 [2024-07-26 23:27:52.826788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:01.112 [2024-07-26 23:27:52.826799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.619 ms 00:20:01.112 [2024-07-26 23:27:52.826814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.112 [2024-07-26 23:27:52.826896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.112 [2024-07-26 23:27:52.826912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:01.112 [2024-07-26 23:27:52.826922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:01.112 [2024-07-26 23:27:52.826934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.112 [2024-07-26 23:27:52.826961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.112 [2024-07-26 23:27:52.826993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:01.112 [2024-07-26 23:27:52.827003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:01.112 [2024-07-26 23:27:52.827015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.112 [2024-07-26 23:27:52.827041] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:01.112 [2024-07-26 23:27:52.832606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.112 [2024-07-26 23:27:52.832638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:01.112 [2024-07-26 23:27:52.832652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.578 ms 00:20:01.112 [2024-07-26 23:27:52.832662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.112 [2024-07-26 23:27:52.832731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.112 [2024-07-26 23:27:52.832754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:01.112 [2024-07-26 23:27:52.832767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:01.112 [2024-07-26 23:27:52.832776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.112 [2024-07-26 23:27:52.832800] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:01.112 [2024-07-26 23:27:52.832823] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:01.112 [2024-07-26 23:27:52.832857] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:01.112 [2024-07-26 23:27:52.832874] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:01.112 [2024-07-26 23:27:52.832940] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:01.112 [2024-07-26 23:27:52.832953] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:20:01.112 [2024-07-26 23:27:52.832968] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:01.112 [2024-07-26 23:27:52.833004] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:01.112 [2024-07-26 23:27:52.833038] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:01.112 [2024-07-26 23:27:52.833049] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:01.112 [2024-07-26 23:27:52.833062] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:01.112 [2024-07-26 23:27:52.833072] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:01.112 [2024-07-26 23:27:52.833087] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:01.112 [2024-07-26 23:27:52.833115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.112 [2024-07-26 23:27:52.833128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:01.112 [2024-07-26 23:27:52.833138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:20:01.112 [2024-07-26 23:27:52.833150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.112 [2024-07-26 23:27:52.833207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.112 [2024-07-26 23:27:52.833221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:01.112 [2024-07-26 23:27:52.833234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:01.112 [2024-07-26 23:27:52.833247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.112 [2024-07-26 23:27:52.833314] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:01.112 [2024-07-26 23:27:52.833330] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:01.112 [2024-07-26 23:27:52.833342] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.112 [2024-07-26 23:27:52.833355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.112 [2024-07-26 23:27:52.833366] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:01.112 [2024-07-26 23:27:52.833379] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:01.112 [2024-07-26 23:27:52.833389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:01.112 [2024-07-26 23:27:52.833405] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:01.112 [2024-07-26 23:27:52.833415] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:01.112 [2024-07-26 23:27:52.833427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.112 [2024-07-26 23:27:52.833436] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:01.112 [2024-07-26 23:27:52.833448] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:01.112 [2024-07-26 23:27:52.833459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.112 [2024-07-26 23:27:52.833470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:01.112 [2024-07-26 23:27:52.833480] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:01.112 [2024-07-26 23:27:52.833492] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.112 [2024-07-26 23:27:52.833501] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:01.112 [2024-07-26 23:27:52.833513] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:01.112 [2024-07-26 23:27:52.833522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.112 [2024-07-26 23:27:52.833534] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:01.112 [2024-07-26 23:27:52.833543] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:01.112 [2024-07-26 23:27:52.833555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:01.112 [2024-07-26 23:27:52.833564] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:01.112 [2024-07-26 23:27:52.833579] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:01.112 [2024-07-26 23:27:52.833588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:01.112 [2024-07-26 23:27:52.833599] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:01.112 [2024-07-26 23:27:52.833608] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:01.112 [2024-07-26 23:27:52.833619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:01.112 [2024-07-26 23:27:52.833627] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:01.112 [2024-07-26 23:27:52.833639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:01.112 [2024-07-26 23:27:52.833660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:01.112 [2024-07-26 23:27:52.833672] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:01.112 [2024-07-26 23:27:52.833680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:01.112 [2024-07-26 23:27:52.833691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:01.113 [2024-07-26 23:27:52.833700] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:01.113 [2024-07-26 23:27:52.833712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:01.113 [2024-07-26 23:27:52.833720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.113 [2024-07-26 23:27:52.833731] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:01.113 [2024-07-26 23:27:52.833740] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:01.113 [2024-07-26 23:27:52.833754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.113 [2024-07-26 23:27:52.833763] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:01.113 [2024-07-26 23:27:52.833776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:01.113 [2024-07-26 23:27:52.833791] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.113 [2024-07-26 23:27:52.833805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.113 [2024-07-26 23:27:52.833815] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:01.113 [2024-07-26 23:27:52.833827] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:01.113 [2024-07-26 23:27:52.833835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:20:01.113 [2024-07-26 23:27:52.833847] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:01.113 [2024-07-26 23:27:52.833856] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:01.113 [2024-07-26 23:27:52.833868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:01.113 [2024-07-26 23:27:52.833881] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:01.113 [2024-07-26 23:27:52.833896] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.113 [2024-07-26 23:27:52.833907] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:01.113 [2024-07-26 23:27:52.833920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:01.113 [2024-07-26 23:27:52.833930] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:01.113 [2024-07-26 23:27:52.833946] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:01.113 [2024-07-26 23:27:52.833957] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:01.113 [2024-07-26 23:27:52.833969] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:01.113 [2024-07-26 23:27:52.833989] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:01.113 [2024-07-26 23:27:52.834004] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:01.113 [2024-07-26 23:27:52.834015] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:01.113 [2024-07-26 23:27:52.834027] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:01.113 [2024-07-26 23:27:52.834037] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:01.113 [2024-07-26 23:27:52.834051] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:01.113 [2024-07-26 23:27:52.834062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:01.113 [2024-07-26 23:27:52.834075] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:01.113 [2024-07-26 23:27:52.834086] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.113 [2024-07-26 23:27:52.834099] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:01.113 [2024-07-26 23:27:52.834110] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:01.113 [2024-07-26 23:27:52.834123] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:01.113 [2024-07-26 23:27:52.834133] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:01.113 [2024-07-26 23:27:52.834147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.113 [2024-07-26 23:27:52.834157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:01.113 [2024-07-26 23:27:52.834169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:20:01.113 [2024-07-26 23:27:52.834181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.113 [2024-07-26 23:27:52.858512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.113 [2024-07-26 23:27:52.858544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:01.113 [2024-07-26 23:27:52.858560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.312 ms 00:20:01.113 [2024-07-26 23:27:52.858570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.113 [2024-07-26 23:27:52.858681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.113 [2024-07-26 23:27:52.858696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:01.113 [2024-07-26 23:27:52.858709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:01.113 [2024-07-26 23:27:52.858719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:52.909408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:52.909441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.400 [2024-07-26 23:27:52.909456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.747 ms 00:20:01.400 [2024-07-26 23:27:52.909466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:52.909528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:52.909539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:01.400 [2024-07-26 23:27:52.909552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:01.400 [2024-07-26 23:27:52.909565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:52.910017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:52.910031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:01.400 [2024-07-26 23:27:52.910046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:20:01.400 [2024-07-26 23:27:52.910055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:52.910159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:52.910172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:01.400 [2024-07-26 23:27:52.910184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:01.400 [2024-07-26 23:27:52.910194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:52.932362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:52.932395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:01.400 [2024-07-26 23:27:52.932409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.178 ms 00:20:01.400 [2024-07-26 23:27:52.932419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:52.951363] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:01.400 [2024-07-26 23:27:52.951400] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:01.400 [2024-07-26 23:27:52.951416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:52.951427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:01.400 [2024-07-26 23:27:52.951441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.920 ms 00:20:01.400 [2024-07-26 23:27:52.951450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:52.980244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:52.980280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:01.400 [2024-07-26 23:27:52.980298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.762 ms 00:20:01.400 [2024-07-26 23:27:52.980308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:52.997549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:52.997582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:01.400 [2024-07-26 23:27:52.997597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.194 ms 00:20:01.400 [2024-07-26 23:27:52.997606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:53.015635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:53.015669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:01.400 [2024-07-26 23:27:53.015687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.985 ms 00:20:01.400 [2024-07-26 23:27:53.015697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:53.016210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:53.016229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:01.400 [2024-07-26 23:27:53.016243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:20:01.400 [2024-07-26 23:27:53.016254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:53.104987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:53.105027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:01.400 [2024-07-26 23:27:53.105044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.845 ms 00:20:01.400 [2024-07-26 23:27:53.105057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 
23:27:53.116283] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:01.400 [2024-07-26 23:27:53.131670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:53.131712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:01.400 [2024-07-26 23:27:53.131725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.584 ms 00:20:01.400 [2024-07-26 23:27:53.131737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:53.131811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:53.131827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:01.400 [2024-07-26 23:27:53.131840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:01.400 [2024-07-26 23:27:53.131852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:53.131900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:53.131914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:01.400 [2024-07-26 23:27:53.131924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:01.400 [2024-07-26 23:27:53.131944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:53.133897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:53.133931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:01.400 [2024-07-26 23:27:53.133942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:20:01.400 [2024-07-26 23:27:53.133954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:53.134012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:53.134029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:01.400 [2024-07-26 23:27:53.134040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:01.400 [2024-07-26 23:27:53.134055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.400 [2024-07-26 23:27:53.134095] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:01.400 [2024-07-26 23:27:53.134113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.400 [2024-07-26 23:27:53.134124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:01.400 [2024-07-26 23:27:53.134136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:01.400 [2024-07-26 23:27:53.134147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.658 [2024-07-26 23:27:53.169602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.658 [2024-07-26 23:27:53.169741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:01.658 [2024-07-26 23:27:53.169858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.484 ms 00:20:01.658 [2024-07-26 23:27:53.169895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.658 [2024-07-26 23:27:53.170023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.659 [2024-07-26 23:27:53.170139] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:01.659 [2024-07-26 23:27:53.170190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:01.659 [2024-07-26 23:27:53.170219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.659 [2024-07-26 23:27:53.171161] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:01.659 [2024-07-26 23:27:53.176128] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.259 ms, result 0 00:20:01.659 [2024-07-26 23:27:53.177405] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:01.659 Some configs were skipped because the RPC state that can call them passed over. 00:20:01.659 23:27:53 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:01.917 [2024-07-26 23:27:53.422181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.917 [2024-07-26 23:27:53.422228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:20:01.917 [2024-07-26 23:27:53.422242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.675 ms 00:20:01.917 [2024-07-26 23:27:53.422255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.917 [2024-07-26 23:27:53.422287] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 35.782 ms, result 0 00:20:01.917 true 00:20:01.917 23:27:53 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:01.917 [2024-07-26 23:27:53.635898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.917 [2024-07-26 23:27:53.635933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:20:01.917 [2024-07-26 23:27:53.635956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.823 ms 00:20:01.917 [2024-07-26 23:27:53.635978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.917 [2024-07-26 23:27:53.636015] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 33.939 ms, result 0 00:20:01.917 true 00:20:02.176 23:27:53 -- ftl/trim.sh@102 -- # killprocess 74064 00:20:02.176 23:27:53 -- common/autotest_common.sh@926 -- # '[' -z 74064 ']' 00:20:02.176 23:27:53 -- common/autotest_common.sh@930 -- # kill -0 74064 00:20:02.176 23:27:53 -- common/autotest_common.sh@931 -- # uname 00:20:02.176 23:27:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:02.176 23:27:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74064 00:20:02.176 killing process with pid 74064 00:20:02.176 23:27:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:02.176 23:27:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:02.176 23:27:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74064' 00:20:02.176 23:27:53 -- common/autotest_common.sh@945 -- # kill 74064 00:20:02.176 23:27:53 -- common/autotest_common.sh@950 -- # wait 74064 00:20:03.114 [2024-07-26 23:27:54.716393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.114 [2024-07-26 23:27:54.716450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:20:03.114 [2024-07-26 23:27:54.716465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:03.114 [2024-07-26 23:27:54.716476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.114 [2024-07-26 23:27:54.716499] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:03.114 [2024-07-26 23:27:54.719804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.114 [2024-07-26 23:27:54.719836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:03.114 [2024-07-26 23:27:54.719852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.291 ms 00:20:03.114 [2024-07-26 23:27:54.719862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.114 [2024-07-26 23:27:54.720125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.114 [2024-07-26 23:27:54.720140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:03.114 [2024-07-26 23:27:54.720152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:20:03.114 [2024-07-26 23:27:54.720163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.114 [2024-07-26 23:27:54.723357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.114 [2024-07-26 23:27:54.723395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:03.114 [2024-07-26 23:27:54.723409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.176 ms 00:20:03.114 [2024-07-26 23:27:54.723421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.114 [2024-07-26 23:27:54.728786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.114 [2024-07-26 23:27:54.728820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:03.114 [2024-07-26 23:27:54.728834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.329 ms 00:20:03.114 [2024-07-26 23:27:54.728844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.114 [2024-07-26 23:27:54.743556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.114 [2024-07-26 23:27:54.743589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:03.114 [2024-07-26 23:27:54.743615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.676 ms 00:20:03.114 [2024-07-26 23:27:54.743624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.114 [2024-07-26 23:27:54.754298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.114 [2024-07-26 23:27:54.754334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:03.114 [2024-07-26 23:27:54.754352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.635 ms 00:20:03.114 [2024-07-26 23:27:54.754361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.114 [2024-07-26 23:27:54.754494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.114 [2024-07-26 23:27:54.754507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:03.114 [2024-07-26 23:27:54.754520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:03.114 [2024-07-26 23:27:54.754530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
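
Each management step above is reported by mngt/ftl_mngt.c:trace_step as a four-entry group (Action, name, duration, status). Below is a minimal sketch, not part of SPDK, that totals per-step durations from a console log like this one; it assumes one log entry per line, and the regexes are derived only from the entry format visible above.

import re
import sys
from collections import OrderedDict

# Entry format assumed from the log above, e.g.
#   ... mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
#   ... mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.176 ms
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)$")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def summarize(lines):
    """Pair each 'name:' entry with the 'duration:' entry that follows it."""
    totals = OrderedDict()
    pending = None
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            pending = m.group(1).strip()
            continue
        m = DUR_RE.search(line)
        if m and pending is not None:
            totals[pending] = totals.get(pending, 0.0) + float(m.group(1))
            pending = None
    return totals

if __name__ == "__main__":
    # The log path is supplied by the caller (illustrative, e.g. a saved console log).
    with open(sys.argv[1]) as log:
        for name, ms in summarize(log).items():
            print(f"{ms:9.3f} ms  {name}")
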
00:20:03.114 [2024-07-26 23:27:54.769595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.114 [2024-07-26 23:27:54.769627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:03.114 [2024-07-26 23:27:54.769641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.068 ms 00:20:03.114 [2024-07-26 23:27:54.769650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.114 [2024-07-26 23:27:54.783870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.114 [2024-07-26 23:27:54.783900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:03.114 [2024-07-26 23:27:54.783919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.202 ms 00:20:03.114 [2024-07-26 23:27:54.783928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.114 [2024-07-26 23:27:54.798078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.114 [2024-07-26 23:27:54.798119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:03.114 [2024-07-26 23:27:54.798135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.113 ms 00:20:03.114 [2024-07-26 23:27:54.798144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.114 [2024-07-26 23:27:54.812117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.114 [2024-07-26 23:27:54.812149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:03.114 [2024-07-26 23:27:54.812163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.933 ms 00:20:03.114 [2024-07-26 23:27:54.812172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.114 [2024-07-26 23:27:54.812209] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:03.114 [2024-07-26 23:27:54.812225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812357] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:03.114 [2024-07-26 23:27:54.812542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 
23:27:54.812645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
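
The dump emitted by ftl_debug.c:ftl_dev_dump_bands prints one entry per band in the form "Band N: valid / size wr_cnt: W state: S"; after the trim test every band here is still free with zero valid blocks. A minimal sketch, assuming the dump is fed on stdin one entry per line, that tallies band states and overall validity (illustration only, not an SPDK tool):

import re
import sys
from collections import Counter

# Matches e.g. "Band 1: 0 / 261120 wr_cnt: 0 state: free" as seen above.
BAND_RE = re.compile(
    r"Band\s+(\d+):\s+(\d+)\s*/\s*(\d+)\s+wr_cnt:\s+(\d+)\s+state:\s+(\w+)"
)

states = Counter()
valid_blocks = total_blocks = 0
for line in sys.stdin:
    m = BAND_RE.search(line)
    if not m:
        continue
    _, valid, size, _, state = m.groups()
    states[state] += 1
    valid_blocks += int(valid)
    total_blocks += int(size)

print("bands by state:", dict(states))
if total_blocks:
    print(f"overall validity: {valid_blocks}/{total_blocks} "
          f"({100.0 * valid_blocks / total_blocks:.2f}%)")
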
00:20:03.115 [2024-07-26 23:27:54.812925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.812993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:03.115 [2024-07-26 23:27:54.813437] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:03.115 [2024-07-26 23:27:54.813460] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6c3633be-1fe3-48ba-8b1c-4ab3ac85cc5d 00:20:03.115 [2024-07-26 23:27:54.813474] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:03.115 [2024-07-26 23:27:54.813485] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:03.115 [2024-07-26 23:27:54.813495] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:03.115 [2024-07-26 23:27:54.813507] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:03.115 [2024-07-26 23:27:54.813516] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:03.115 [2024-07-26 23:27:54.813528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:03.115 [2024-07-26 23:27:54.813537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:03.115 [2024-07-26 23:27:54.813548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:03.115 [2024-07-26 23:27:54.813556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:03.115 [2024-07-26 23:27:54.813568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.115 [2024-07-26 23:27:54.813578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:03.115 [2024-07-26 23:27:54.813591] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.363 ms 00:20:03.115 [2024-07-26 23:27:54.813600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.115 [2024-07-26 23:27:54.831740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.115 [2024-07-26 23:27:54.831772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:03.115 [2024-07-26 23:27:54.831789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.137 ms 00:20:03.115 [2024-07-26 23:27:54.831798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.115 [2024-07-26 23:27:54.832089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.115 [2024-07-26 23:27:54.832103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:03.116 [2024-07-26 23:27:54.832116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:20:03.116 [2024-07-26 23:27:54.832125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.375 [2024-07-26 23:27:54.894160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.375 [2024-07-26 23:27:54.894192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:03.375 [2024-07-26 23:27:54.894206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.375 [2024-07-26 23:27:54.894215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.375 [2024-07-26 23:27:54.894290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.375 [2024-07-26 23:27:54.894302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:03.375 [2024-07-26 23:27:54.894314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.375 [2024-07-26 23:27:54.894325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.375 [2024-07-26 23:27:54.894377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.375 [2024-07-26 23:27:54.894390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:03.375 [2024-07-26 23:27:54.894405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.375 [2024-07-26 23:27:54.894415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.375 [2024-07-26 23:27:54.894436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.375 [2024-07-26 23:27:54.894446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:03.375 [2024-07-26 23:27:54.894457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.375 [2024-07-26 23:27:54.894466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.375 [2024-07-26 23:27:55.002988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.375 [2024-07-26 23:27:55.003035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:03.375 [2024-07-26 23:27:55.003050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.375 [2024-07-26 23:27:55.003061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.375 [2024-07-26 23:27:55.044302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.375 [2024-07-26 23:27:55.044338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:20:03.375 [2024-07-26 23:27:55.044353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.375 [2024-07-26 23:27:55.044363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.375 [2024-07-26 23:27:55.044423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.375 [2024-07-26 23:27:55.044435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:03.375 [2024-07-26 23:27:55.044451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.375 [2024-07-26 23:27:55.044461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.375 [2024-07-26 23:27:55.044491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.375 [2024-07-26 23:27:55.044501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:03.375 [2024-07-26 23:27:55.044513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.375 [2024-07-26 23:27:55.044522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.375 [2024-07-26 23:27:55.044634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.375 [2024-07-26 23:27:55.044651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:03.375 [2024-07-26 23:27:55.044663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.375 [2024-07-26 23:27:55.044672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.375 [2024-07-26 23:27:55.044712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.375 [2024-07-26 23:27:55.044725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:03.375 [2024-07-26 23:27:55.044737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.375 [2024-07-26 23:27:55.044747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.375 [2024-07-26 23:27:55.044784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.375 [2024-07-26 23:27:55.044799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:03.375 [2024-07-26 23:27:55.044814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.375 [2024-07-26 23:27:55.044823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.375 [2024-07-26 23:27:55.044868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.375 [2024-07-26 23:27:55.044881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:03.375 [2024-07-26 23:27:55.044893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.375 [2024-07-26 23:27:55.044904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.375 [2024-07-26 23:27:55.045079] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.196 ms, result 0 00:20:04.752 23:27:56 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:04.752 [2024-07-26 23:27:56.317449] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
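
The trim test flow visible in this log is: unmap two 1024-block LBA ranges over JSON-RPC (bdev_ftl_unmap at LBA 0 and at LBA 23591936, whose range ends exactly at the last of the 23592960 L2P entries), then read 65536 blocks back with spdk_dd for verification. A minimal sketch replaying those same commands via subprocess; the Python wrapper itself is illustrative and is not part of ftl/trim.sh, while the binaries, flags, and paths are taken verbatim from the log.

import subprocess

SPDK = "/home/vagrant/spdk_repo/spdk"
RPC = f"{SPDK}/scripts/rpc.py"
DD = f"{SPDK}/build/bin/spdk_dd"

def unmap(bdev, lba, num_blocks):
    # Same RPC invocation as in the log: bdev_ftl_unmap -b ftl0 --lba ... --num_blocks ...
    subprocess.run(
        [RPC, "bdev_ftl_unmap", "-b", bdev,
         "--lba", str(lba), "--num_blocks", str(num_blocks)],
        check=True,
    )

# The two ranges the test unmaps: the start of the device and the top of the L2P
# (23591936 + 1024 = 23592960, the L2P entry count reported during layout setup).
unmap("ftl0", 0, 1024)
unmap("ftl0", 23591936, 1024)

# Read 65536 blocks back out of ftl0 into a data file, as ftl/trim.sh@105 does.
subprocess.run(
    [DD, "--ib=ftl0", f"--of={SPDK}/test/ftl/data",
     "--count=65536", f"--json={SPDK}/test/ftl/config/ftl.json"],
    check=True,
)
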
00:20:04.752 [2024-07-26 23:27:56.317565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74140 ] 00:20:04.752 [2024-07-26 23:27:56.490185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.010 [2024-07-26 23:27:56.702256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.579 [2024-07-26 23:27:57.083493] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:05.579 [2024-07-26 23:27:57.083559] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:05.579 [2024-07-26 23:27:57.238021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.579 [2024-07-26 23:27:57.238065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:05.579 [2024-07-26 23:27:57.238080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:05.579 [2024-07-26 23:27:57.238093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.579 [2024-07-26 23:27:57.241035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.579 [2024-07-26 23:27:57.241071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.579 [2024-07-26 23:27:57.241082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.928 ms 00:20:05.579 [2024-07-26 23:27:57.241095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.579 [2024-07-26 23:27:57.241180] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:05.579 [2024-07-26 23:27:57.242304] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:05.579 [2024-07-26 23:27:57.242338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.579 [2024-07-26 23:27:57.242352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.579 [2024-07-26 23:27:57.242363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:20:05.579 [2024-07-26 23:27:57.242372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.579 [2024-07-26 23:27:57.243808] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:05.579 [2024-07-26 23:27:57.262982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.579 [2024-07-26 23:27:57.263018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:05.579 [2024-07-26 23:27:57.263030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.206 ms 00:20:05.579 [2024-07-26 23:27:57.263040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.579 [2024-07-26 23:27:57.263128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.579 [2024-07-26 23:27:57.263141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:05.579 [2024-07-26 23:27:57.263155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:05.579 [2024-07-26 23:27:57.263164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.579 [2024-07-26 23:27:57.269877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.579 [2024-07-26 
23:27:57.269902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.579 [2024-07-26 23:27:57.269914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.684 ms 00:20:05.579 [2024-07-26 23:27:57.269924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.579 [2024-07-26 23:27:57.270037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.579 [2024-07-26 23:27:57.270055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.579 [2024-07-26 23:27:57.270066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:05.579 [2024-07-26 23:27:57.270076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.579 [2024-07-26 23:27:57.270105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.579 [2024-07-26 23:27:57.270115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:05.579 [2024-07-26 23:27:57.270126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:05.579 [2024-07-26 23:27:57.270136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.579 [2024-07-26 23:27:57.270160] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:05.579 [2024-07-26 23:27:57.275671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.579 [2024-07-26 23:27:57.275703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.579 [2024-07-26 23:27:57.275714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.527 ms 00:20:05.579 [2024-07-26 23:27:57.275723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.579 [2024-07-26 23:27:57.275787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.579 [2024-07-26 23:27:57.275802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:05.579 [2024-07-26 23:27:57.275812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:05.579 [2024-07-26 23:27:57.275821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.579 [2024-07-26 23:27:57.275840] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:05.579 [2024-07-26 23:27:57.275863] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:05.579 [2024-07-26 23:27:57.275894] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:05.579 [2024-07-26 23:27:57.275911] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:05.579 [2024-07-26 23:27:57.275991] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:05.579 [2024-07-26 23:27:57.276005] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:05.580 [2024-07-26 23:27:57.276017] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:05.580 [2024-07-26 23:27:57.276030] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:05.580 [2024-07-26 23:27:57.276041] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:05.580 [2024-07-26 23:27:57.276052] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:05.580 [2024-07-26 23:27:57.276061] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:05.580 [2024-07-26 23:27:57.276071] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:05.580 [2024-07-26 23:27:57.276081] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:05.580 [2024-07-26 23:27:57.276091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.580 [2024-07-26 23:27:57.276105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:05.580 [2024-07-26 23:27:57.276114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:20:05.580 [2024-07-26 23:27:57.276124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.580 [2024-07-26 23:27:57.276181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.580 [2024-07-26 23:27:57.276192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:05.580 [2024-07-26 23:27:57.276202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:05.580 [2024-07-26 23:27:57.276210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.580 [2024-07-26 23:27:57.276270] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:05.580 [2024-07-26 23:27:57.276284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:05.580 [2024-07-26 23:27:57.276294] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:05.580 [2024-07-26 23:27:57.276306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.580 [2024-07-26 23:27:57.276316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:05.580 [2024-07-26 23:27:57.276325] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:05.580 [2024-07-26 23:27:57.276335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:05.580 [2024-07-26 23:27:57.276344] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:05.580 [2024-07-26 23:27:57.276354] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:05.580 [2024-07-26 23:27:57.276364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:05.580 [2024-07-26 23:27:57.276373] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:05.580 [2024-07-26 23:27:57.276381] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:05.580 [2024-07-26 23:27:57.276391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:05.580 [2024-07-26 23:27:57.276400] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:05.580 [2024-07-26 23:27:57.276409] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:05.580 [2024-07-26 23:27:57.276417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.580 [2024-07-26 23:27:57.276425] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:05.580 [2024-07-26 23:27:57.276434] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:05.580 [2024-07-26 23:27:57.276442] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:20:05.580 [2024-07-26 23:27:57.276461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:05.580 [2024-07-26 23:27:57.276470] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:05.580 [2024-07-26 23:27:57.276478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:05.580 [2024-07-26 23:27:57.276487] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:05.580 [2024-07-26 23:27:57.276495] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:05.580 [2024-07-26 23:27:57.276504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:05.580 [2024-07-26 23:27:57.276512] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:05.580 [2024-07-26 23:27:57.276521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:05.580 [2024-07-26 23:27:57.276529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:05.580 [2024-07-26 23:27:57.276537] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:05.580 [2024-07-26 23:27:57.276546] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:05.580 [2024-07-26 23:27:57.276555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:05.580 [2024-07-26 23:27:57.276563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:05.580 [2024-07-26 23:27:57.276572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:05.580 [2024-07-26 23:27:57.276580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:05.580 [2024-07-26 23:27:57.276588] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:05.580 [2024-07-26 23:27:57.276596] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:05.580 [2024-07-26 23:27:57.276604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:05.580 [2024-07-26 23:27:57.276612] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:05.580 [2024-07-26 23:27:57.276620] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:05.580 [2024-07-26 23:27:57.276629] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:05.580 [2024-07-26 23:27:57.276637] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:05.580 [2024-07-26 23:27:57.276648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:05.580 [2024-07-26 23:27:57.276657] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:05.580 [2024-07-26 23:27:57.276666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.580 [2024-07-26 23:27:57.276676] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:05.580 [2024-07-26 23:27:57.276684] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:05.580 [2024-07-26 23:27:57.276693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:05.580 [2024-07-26 23:27:57.276701] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:05.580 [2024-07-26 23:27:57.276709] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:05.580 [2024-07-26 23:27:57.276718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:05.580 [2024-07-26 23:27:57.276727] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:05.580 [2024-07-26 23:27:57.276742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:05.580 [2024-07-26 23:27:57.276752] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:05.580 [2024-07-26 23:27:57.276762] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:05.580 [2024-07-26 23:27:57.276771] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:05.580 [2024-07-26 23:27:57.276781] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:05.580 [2024-07-26 23:27:57.276791] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:05.580 [2024-07-26 23:27:57.276801] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:05.580 [2024-07-26 23:27:57.276811] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:05.580 [2024-07-26 23:27:57.276820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:05.580 [2024-07-26 23:27:57.276830] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:05.580 [2024-07-26 23:27:57.276839] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:05.580 [2024-07-26 23:27:57.276849] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:05.580 [2024-07-26 23:27:57.276858] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:05.580 [2024-07-26 23:27:57.276868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:05.580 [2024-07-26 23:27:57.276877] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:05.580 [2024-07-26 23:27:57.276888] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:05.580 [2024-07-26 23:27:57.276898] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:05.580 [2024-07-26 23:27:57.276907] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:05.580 [2024-07-26 23:27:57.276917] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:05.580 [2024-07-26 23:27:57.276926] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:05.580 [2024-07-26 23:27:57.276936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.580 [2024-07-26 23:27:57.276950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:05.580 [2024-07-26 23:27:57.276960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:20:05.580 [2024-07-26 23:27:57.277326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.580 [2024-07-26 23:27:57.300527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.580 [2024-07-26 23:27:57.300561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:05.580 [2024-07-26 23:27:57.300574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.157 ms 00:20:05.580 [2024-07-26 23:27:57.300583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.580 [2024-07-26 23:27:57.300686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.580 [2024-07-26 23:27:57.300698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:05.580 [2024-07-26 23:27:57.300708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:05.580 [2024-07-26 23:27:57.300718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.840 [2024-07-26 23:27:57.375632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.840 [2024-07-26 23:27:57.375663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:05.840 [2024-07-26 23:27:57.375676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.014 ms 00:20:05.840 [2024-07-26 23:27:57.375686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.840 [2024-07-26 23:27:57.375742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.840 [2024-07-26 23:27:57.375753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:05.840 [2024-07-26 23:27:57.375763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:05.840 [2024-07-26 23:27:57.375773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.840 [2024-07-26 23:27:57.376231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.840 [2024-07-26 23:27:57.376246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:05.840 [2024-07-26 23:27:57.376258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:20:05.840 [2024-07-26 23:27:57.376268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.840 [2024-07-26 23:27:57.376378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.840 [2024-07-26 23:27:57.376391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:05.840 [2024-07-26 23:27:57.376401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:05.840 [2024-07-26 23:27:57.376410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.840 [2024-07-26 23:27:57.399426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.840 [2024-07-26 23:27:57.399458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:05.840 [2024-07-26 23:27:57.399470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.032 ms 00:20:05.840 
[2024-07-26 23:27:57.399480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.840 [2024-07-26 23:27:57.418440] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:05.840 [2024-07-26 23:27:57.418479] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:05.840 [2024-07-26 23:27:57.418493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.840 [2024-07-26 23:27:57.418504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:05.840 [2024-07-26 23:27:57.418515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.943 ms 00:20:05.840 [2024-07-26 23:27:57.418525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.840 [2024-07-26 23:27:57.447718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.840 [2024-07-26 23:27:57.447755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:05.840 [2024-07-26 23:27:57.447768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.169 ms 00:20:05.840 [2024-07-26 23:27:57.447784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.840 [2024-07-26 23:27:57.465455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.840 [2024-07-26 23:27:57.465490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:05.840 [2024-07-26 23:27:57.465501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.628 ms 00:20:05.840 [2024-07-26 23:27:57.465511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.840 [2024-07-26 23:27:57.482878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.840 [2024-07-26 23:27:57.482923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:05.840 [2024-07-26 23:27:57.482935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.326 ms 00:20:05.840 [2024-07-26 23:27:57.482944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.840 [2024-07-26 23:27:57.483400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.840 [2024-07-26 23:27:57.483416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:05.840 [2024-07-26 23:27:57.483427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:20:05.840 [2024-07-26 23:27:57.483436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.840 [2024-07-26 23:27:57.569537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.840 [2024-07-26 23:27:57.569577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:05.840 [2024-07-26 23:27:57.569591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.218 ms 00:20:05.841 [2024-07-26 23:27:57.569602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.841 [2024-07-26 23:27:57.580702] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:06.099 [2024-07-26 23:27:57.595920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.099 [2024-07-26 23:27:57.595976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:06.099 [2024-07-26 23:27:57.595990] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.280 ms 00:20:06.099 [2024-07-26 23:27:57.596000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.099 [2024-07-26 23:27:57.596081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.099 [2024-07-26 23:27:57.596094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:06.099 [2024-07-26 23:27:57.596106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:06.099 [2024-07-26 23:27:57.596116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.099 [2024-07-26 23:27:57.596166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.099 [2024-07-26 23:27:57.596181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:06.099 [2024-07-26 23:27:57.596191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:06.099 [2024-07-26 23:27:57.596200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.099 [2024-07-26 23:27:57.598100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.099 [2024-07-26 23:27:57.598130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:06.099 [2024-07-26 23:27:57.598140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.884 ms 00:20:06.099 [2024-07-26 23:27:57.598150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.099 [2024-07-26 23:27:57.598179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.099 [2024-07-26 23:27:57.598189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:06.099 [2024-07-26 23:27:57.598199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:06.099 [2024-07-26 23:27:57.598212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.099 [2024-07-26 23:27:57.598247] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:06.099 [2024-07-26 23:27:57.598258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.099 [2024-07-26 23:27:57.598268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:06.099 [2024-07-26 23:27:57.598277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:06.099 [2024-07-26 23:27:57.598287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.099 [2024-07-26 23:27:57.633025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.099 [2024-07-26 23:27:57.633060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:06.099 [2024-07-26 23:27:57.633079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.771 ms 00:20:06.099 [2024-07-26 23:27:57.633089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.099 [2024-07-26 23:27:57.633188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.099 [2024-07-26 23:27:57.633200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:06.099 [2024-07-26 23:27:57.633211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:06.099 [2024-07-26 23:27:57.633221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.099 [2024-07-26 23:27:57.634107] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.099 [2024-07-26 23:27:57.638831] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.391 ms, result 0 00:20:06.099 [2024-07-26 23:27:57.639695] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:06.100 [2024-07-26 23:27:57.656735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.490  Copying: 26/256 [MB] (26 MBps) Copying: 50/256 [MB] (23 MBps) Copying: 73/256 [MB] (23 MBps) Copying: 97/256 [MB] (23 MBps) Copying: 120/256 [MB] (23 MBps) Copying: 143/256 [MB] (23 MBps) Copying: 167/256 [MB] (23 MBps) Copying: 190/256 [MB] (23 MBps) Copying: 214/256 [MB] (23 MBps) Copying: 237/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-07-26 23:28:08.958023] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.490 [2024-07-26 23:28:08.972871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.490 [2024-07-26 23:28:08.972922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:17.490 [2024-07-26 23:28:08.972938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:17.490 [2024-07-26 23:28:08.972957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.490 [2024-07-26 23:28:08.973004] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:17.490 [2024-07-26 23:28:08.976387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.490 [2024-07-26 23:28:08.976424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:17.490 [2024-07-26 23:28:08.976436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.372 ms 00:20:17.490 [2024-07-26 23:28:08.976446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.490 [2024-07-26 23:28:08.976678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.491 [2024-07-26 23:28:08.976692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:17.491 [2024-07-26 23:28:08.976703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:20:17.491 [2024-07-26 23:28:08.976713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.491 [2024-07-26 23:28:08.979381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.491 [2024-07-26 23:28:08.979412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:17.491 [2024-07-26 23:28:08.979423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.654 ms 00:20:17.491 [2024-07-26 23:28:08.979432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.491 [2024-07-26 23:28:08.985008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.491 [2024-07-26 23:28:08.985051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:17.491 [2024-07-26 23:28:08.985064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.548 ms 00:20:17.491 [2024-07-26 23:28:08.985074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.491 [2024-07-26 23:28:09.023714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:17.491 [2024-07-26 23:28:09.023758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:17.491 [2024-07-26 23:28:09.023771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.626 ms 00:20:17.491 [2024-07-26 23:28:09.023782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.491 [2024-07-26 23:28:09.046924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.491 [2024-07-26 23:28:09.046985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:17.491 [2024-07-26 23:28:09.047005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.106 ms 00:20:17.491 [2024-07-26 23:28:09.047015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.491 [2024-07-26 23:28:09.047158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.491 [2024-07-26 23:28:09.047171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:17.491 [2024-07-26 23:28:09.047181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:17.491 [2024-07-26 23:28:09.047191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.491 [2024-07-26 23:28:09.084027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.491 [2024-07-26 23:28:09.084063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:17.491 [2024-07-26 23:28:09.084089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.878 ms 00:20:17.491 [2024-07-26 23:28:09.084098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.491 [2024-07-26 23:28:09.120738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.491 [2024-07-26 23:28:09.120773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:17.491 [2024-07-26 23:28:09.120786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.633 ms 00:20:17.491 [2024-07-26 23:28:09.120796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.491 [2024-07-26 23:28:09.156317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.491 [2024-07-26 23:28:09.156354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:17.491 [2024-07-26 23:28:09.156367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.514 ms 00:20:17.491 [2024-07-26 23:28:09.156376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.491 [2024-07-26 23:28:09.192092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.491 [2024-07-26 23:28:09.192127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:17.491 [2024-07-26 23:28:09.192140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.684 ms 00:20:17.491 [2024-07-26 23:28:09.192149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.491 [2024-07-26 23:28:09.192213] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:17.491 [2024-07-26 23:28:09.192231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:17.491 [2024-07-26 23:28:09.192244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:17.491 [2024-07-26 23:28:09.192255] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
[bands 4-99 elided: each of the 100 bands reports the identical entry "0 / 261120 wr_cnt: 0 state: free"]
00:20:17.492 [2024-07-26 23:28:09.193306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:20:17.492 [2024-07-26 23:28:09.193323] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:17.492 [2024-07-26 23:28:09.193343] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6c3633be-1fe3-48ba-8b1c-4ab3ac85cc5d
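The statistics continuing below report total writes: 960 against user writes: 0, which is why WAF is printed as inf. A quick sanity check of that figure, assuming the conventional definition WAF = media writes / user writes (the dump itself does not spell the formula out):

  awk 'BEGIN { total = 960; user = 0; print (user > 0 ? total / user : "inf") }'   # prints: inf

With no user I/O issued yet, all 960 writes are FTL-internal metadata traffic, so the ratio is undefined and the dump falls back to inf.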
00:20:17.492 [2024-07-26 23:28:09.193353] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:17.492 [2024-07-26 23:28:09.193363] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:17.492 [2024-07-26 23:28:09.193372] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:17.492 [2024-07-26 23:28:09.193382] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:17.492 [2024-07-26 23:28:09.193392] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:17.492 [2024-07-26 23:28:09.193403] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:17.492 [2024-07-26 23:28:09.193412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:17.492 [2024-07-26 23:28:09.193421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:17.492 [2024-07-26 23:28:09.193429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:17.492 [2024-07-26 23:28:09.193438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.492 [2024-07-26 23:28:09.193451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:17.492 [2024-07-26 23:28:09.193461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:20:17.492 [2024-07-26 23:28:09.193469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.492 [2024-07-26 23:28:09.211545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.492 [2024-07-26 23:28:09.211578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:17.492 [2024-07-26 23:28:09.211590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.086 ms 00:20:17.492 [2024-07-26 23:28:09.211600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.492 [2024-07-26 23:28:09.211864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.492 [2024-07-26 23:28:09.211877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:17.492 [2024-07-26 23:28:09.211887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:20:17.492 [2024-07-26 23:28:09.211897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.752 [2024-07-26 23:28:09.265711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.752 [2024-07-26 23:28:09.265745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.752 [2024-07-26 23:28:09.265758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.752 [2024-07-26 23:28:09.265768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.752 [2024-07-26 23:28:09.265844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.752 [2024-07-26 23:28:09.265855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.752 [2024-07-26 23:28:09.265865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.752 [2024-07-26 23:28:09.265875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.752 [2024-07-26 23:28:09.265916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.752 [2024-07-26 23:28:09.265928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.752 [2024-07-26 23:28:09.265938] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.752 [2024-07-26 23:28:09.265948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.752 [2024-07-26 23:28:09.265989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.752 [2024-07-26 23:28:09.266005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.752 [2024-07-26 23:28:09.266015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.752 [2024-07-26 23:28:09.266024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.752 [2024-07-26 23:28:09.373353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.752 [2024-07-26 23:28:09.373396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.752 [2024-07-26 23:28:09.373409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.752 [2024-07-26 23:28:09.373419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.752 [2024-07-26 23:28:09.415634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.752 [2024-07-26 23:28:09.415673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.752 [2024-07-26 23:28:09.415685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.752 [2024-07-26 23:28:09.415695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.752 [2024-07-26 23:28:09.415752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.752 [2024-07-26 23:28:09.415763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.752 [2024-07-26 23:28:09.415774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.752 [2024-07-26 23:28:09.415785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.752 [2024-07-26 23:28:09.415815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.752 [2024-07-26 23:28:09.415826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.752 [2024-07-26 23:28:09.415842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.752 [2024-07-26 23:28:09.415853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.752 [2024-07-26 23:28:09.415987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.752 [2024-07-26 23:28:09.416002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.752 [2024-07-26 23:28:09.416012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.752 [2024-07-26 23:28:09.416023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.752 [2024-07-26 23:28:09.416063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.752 [2024-07-26 23:28:09.416076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:17.752 [2024-07-26 23:28:09.416086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.752 [2024-07-26 23:28:09.416101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.752 [2024-07-26 23:28:09.416139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.752 [2024-07-26 23:28:09.416150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:20:17.752 [2024-07-26 23:28:09.416159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.752 [2024-07-26 23:28:09.416169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.752 [2024-07-26 23:28:09.416223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.752 [2024-07-26 23:28:09.416235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.752 [2024-07-26 23:28:09.416248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.752 [2024-07-26 23:28:09.416261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.752 [2024-07-26 23:28:09.416390] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 444.255 ms, result 0 00:20:19.132 00:20:19.132 00:20:19.132 23:28:10 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:19.391 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:19.391 23:28:11 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:19.391 23:28:11 -- ftl/trim.sh@109 -- # fio_kill 00:20:19.391 23:28:11 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:19.391 23:28:11 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:19.391 23:28:11 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:19.391 23:28:11 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:19.651 23:28:11 -- ftl/trim.sh@20 -- # killprocess 74064 00:20:19.651 23:28:11 -- common/autotest_common.sh@926 -- # '[' -z 74064 ']' 00:20:19.651 23:28:11 -- common/autotest_common.sh@930 -- # kill -0 74064 00:20:19.651 Process with pid 74064 is not found 00:20:19.651 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (74064) - No such process 00:20:19.651 23:28:11 -- common/autotest_common.sh@953 -- # echo 'Process with pid 74064 is not found' 00:20:19.651 ************************************ 00:20:19.651 END TEST ftl_trim 00:20:19.651 ************************************ 00:20:19.651 00:20:19.651 real 1m13.668s 00:20:19.651 user 1m38.307s 00:20:19.651 sys 0m6.610s 00:20:19.651 23:28:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.651 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:20:19.651 23:28:11 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:20:19.651 23:28:11 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:20:19.651 23:28:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:19.651 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:20:19.651 ************************************ 00:20:19.651 START TEST ftl_restore 00:20:19.651 ************************************ 00:20:19.651 23:28:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:20:19.651 * Looking for test storage... 
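The ftl_trim run above closes with a checksum round trip: a checksum recorded earlier in the test is replayed with md5sum -c after the device comes back, and the "/home/vagrant/spdk_repo/spdk/test/ftl/data: OK" line is the pass condition before the cleanup rm calls. A minimal sketch of that verification pattern, with hypothetical file names standing in for the test's own:

  dd if=/dev/urandom of=pattern bs=1M count=16   # produce a known payload
  md5sum pattern > pattern.md5                   # record the checksum up front
  md5sum -c pattern.md5                          # later: prints "pattern: OK" only if the data read back matches exactly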
00:20:19.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:19.651 23:28:11 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:19.651 23:28:11 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:19.911 23:28:11 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:19.911 23:28:11 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:19.911 23:28:11 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:19.911 23:28:11 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:19.911 23:28:11 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:19.911 23:28:11 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:19.911 23:28:11 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:19.911 23:28:11 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:19.911 23:28:11 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:19.911 23:28:11 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:19.911 23:28:11 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:19.911 23:28:11 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:19.911 23:28:11 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:19.911 23:28:11 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:19.911 23:28:11 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:19.911 23:28:11 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:19.911 23:28:11 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:19.911 23:28:11 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:19.911 23:28:11 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:19.911 23:28:11 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:19.911 23:28:11 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:19.911 23:28:11 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:19.911 23:28:11 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:19.911 23:28:11 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:19.911 23:28:11 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:19.911 23:28:11 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:19.911 23:28:11 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:19.911 23:28:11 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:19.911 23:28:11 -- ftl/restore.sh@13 -- # mktemp -d 00:20:19.911 23:28:11 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.NhhH1Bwi9Q 00:20:19.911 23:28:11 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:19.911 23:28:11 -- ftl/restore.sh@16 -- # case $opt in 00:20:19.911 23:28:11 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:20:19.911 23:28:11 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:19.911 23:28:11 -- ftl/restore.sh@23 -- # shift 2 00:20:19.911 23:28:11 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:20:19.911 23:28:11 -- ftl/restore.sh@25 -- # timeout=240 00:20:19.911 23:28:11 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 
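The xtrace above shows restore.sh parsing its invocation (restore.sh -c 0000:00:06.0 0000:00:07.0): getopts :u:c:f consumes -c 0000:00:06.0 into nv_cache, shift 2 drops the option pair, and the remaining positional argument becomes the base device. A minimal reconstruction of that option handling, based on the trace rather than the script itself:

  while getopts ':u:c:f' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;   # -c: PCIe address of the NV-cache controller (0000:00:06.0 here)
    esac
  done
  shift 2                      # matches the traced 'shift 2', leaving only the positional args
  device=$1                    # base-device PCIe address (0000:00:07.0 here)
  timeout=240                  # per-operation timeout, as set at restore.sh line 25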
00:20:19.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.911 23:28:11 -- ftl/restore.sh@39 -- # svcpid=74345 00:20:19.911 23:28:11 -- ftl/restore.sh@41 -- # waitforlisten 74345 00:20:19.911 23:28:11 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:19.911 23:28:11 -- common/autotest_common.sh@819 -- # '[' -z 74345 ']' 00:20:19.911 23:28:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.911 23:28:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:19.911 23:28:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.911 23:28:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:19.911 23:28:11 -- common/autotest_common.sh@10 -- # set +x 00:20:19.911 [2024-07-26 23:28:11.548987] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:19.911 [2024-07-26 23:28:11.549306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74345 ] 00:20:20.171 [2024-07-26 23:28:11.723313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.430 [2024-07-26 23:28:11.934363] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:20.430 [2024-07-26 23:28:11.934693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.368 23:28:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:21.368 23:28:12 -- common/autotest_common.sh@852 -- # return 0 00:20:21.368 23:28:12 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:20:21.368 23:28:12 -- ftl/common.sh@54 -- # local name=nvme0 00:20:21.368 23:28:12 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:20:21.368 23:28:12 -- ftl/common.sh@56 -- # local size=103424 00:20:21.368 23:28:12 -- ftl/common.sh@59 -- # local base_bdev 00:20:21.368 23:28:12 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:20:21.627 23:28:13 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:21.627 23:28:13 -- ftl/common.sh@62 -- # local base_size 00:20:21.627 23:28:13 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:21.627 23:28:13 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:20:21.627 23:28:13 -- common/autotest_common.sh@1358 -- # local bdev_info 00:20:21.627 23:28:13 -- common/autotest_common.sh@1359 -- # local bs 00:20:21.627 23:28:13 -- common/autotest_common.sh@1360 -- # local nb 00:20:21.627 23:28:13 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:21.886 23:28:13 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:20:21.886 { 00:20:21.886 "name": "nvme0n1", 00:20:21.886 "aliases": [ 00:20:21.886 "bf74b7d7-fe81-4df7-b8f0-acbb5c7e14f5" 00:20:21.886 ], 00:20:21.886 "product_name": "NVMe disk", 00:20:21.886 "block_size": 4096, 00:20:21.886 "num_blocks": 1310720, 00:20:21.886 "uuid": "bf74b7d7-fe81-4df7-b8f0-acbb5c7e14f5", 00:20:21.886 "assigned_rate_limits": { 00:20:21.886 "rw_ios_per_sec": 0, 00:20:21.886 "rw_mbytes_per_sec": 0, 00:20:21.886 "r_mbytes_per_sec": 0, 00:20:21.886 "w_mbytes_per_sec": 0 00:20:21.886 }, 00:20:21.886 "claimed": true, 00:20:21.886 
"claim_type": "read_many_write_one", 00:20:21.886 "zoned": false, 00:20:21.886 "supported_io_types": { 00:20:21.886 "read": true, 00:20:21.886 "write": true, 00:20:21.886 "unmap": true, 00:20:21.886 "write_zeroes": true, 00:20:21.886 "flush": true, 00:20:21.886 "reset": true, 00:20:21.886 "compare": true, 00:20:21.886 "compare_and_write": false, 00:20:21.886 "abort": true, 00:20:21.886 "nvme_admin": true, 00:20:21.886 "nvme_io": true 00:20:21.886 }, 00:20:21.886 "driver_specific": { 00:20:21.886 "nvme": [ 00:20:21.886 { 00:20:21.886 "pci_address": "0000:00:07.0", 00:20:21.886 "trid": { 00:20:21.886 "trtype": "PCIe", 00:20:21.886 "traddr": "0000:00:07.0" 00:20:21.886 }, 00:20:21.886 "ctrlr_data": { 00:20:21.886 "cntlid": 0, 00:20:21.886 "vendor_id": "0x1b36", 00:20:21.886 "model_number": "QEMU NVMe Ctrl", 00:20:21.886 "serial_number": "12341", 00:20:21.886 "firmware_revision": "8.0.0", 00:20:21.886 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:21.886 "oacs": { 00:20:21.886 "security": 0, 00:20:21.886 "format": 1, 00:20:21.886 "firmware": 0, 00:20:21.886 "ns_manage": 1 00:20:21.886 }, 00:20:21.886 "multi_ctrlr": false, 00:20:21.886 "ana_reporting": false 00:20:21.886 }, 00:20:21.886 "vs": { 00:20:21.886 "nvme_version": "1.4" 00:20:21.886 }, 00:20:21.886 "ns_data": { 00:20:21.886 "id": 1, 00:20:21.886 "can_share": false 00:20:21.886 } 00:20:21.886 } 00:20:21.886 ], 00:20:21.886 "mp_policy": "active_passive" 00:20:21.886 } 00:20:21.886 } 00:20:21.886 ]' 00:20:21.886 23:28:13 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:20:21.886 23:28:13 -- common/autotest_common.sh@1362 -- # bs=4096 00:20:21.886 23:28:13 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:20:21.886 23:28:13 -- common/autotest_common.sh@1363 -- # nb=1310720 00:20:21.886 23:28:13 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:20:21.886 23:28:13 -- common/autotest_common.sh@1367 -- # echo 5120 00:20:21.886 23:28:13 -- ftl/common.sh@63 -- # base_size=5120 00:20:21.886 23:28:13 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:21.886 23:28:13 -- ftl/common.sh@67 -- # clear_lvols 00:20:21.886 23:28:13 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:21.886 23:28:13 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:22.145 23:28:13 -- ftl/common.sh@28 -- # stores=b1cbebb8-c620-4c32-8cf3-207bf9b36ab1 00:20:22.145 23:28:13 -- ftl/common.sh@29 -- # for lvs in $stores 00:20:22.145 23:28:13 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b1cbebb8-c620-4c32-8cf3-207bf9b36ab1 00:20:22.145 23:28:13 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:22.403 23:28:14 -- ftl/common.sh@68 -- # lvs=163a7699-2707-42b2-bc57-107779331483 00:20:22.403 23:28:14 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 163a7699-2707-42b2-bc57-107779331483 00:20:22.663 23:28:14 -- ftl/restore.sh@43 -- # split_bdev=afff3c9a-987c-4b34-9570-8600d7a3886a 00:20:22.663 23:28:14 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:20:22.663 23:28:14 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 afff3c9a-987c-4b34-9570-8600d7a3886a 00:20:22.663 23:28:14 -- ftl/common.sh@35 -- # local name=nvc0 00:20:22.663 23:28:14 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:20:22.663 23:28:14 -- ftl/common.sh@37 -- # local base_bdev=afff3c9a-987c-4b34-9570-8600d7a3886a 00:20:22.663 23:28:14 -- 
ftl/common.sh@38 -- # local cache_size= 00:20:22.663 23:28:14 -- ftl/common.sh@41 -- # get_bdev_size afff3c9a-987c-4b34-9570-8600d7a3886a 00:20:22.663 23:28:14 -- common/autotest_common.sh@1357 -- # local bdev_name=afff3c9a-987c-4b34-9570-8600d7a3886a 00:20:22.663 23:28:14 -- common/autotest_common.sh@1358 -- # local bdev_info 00:20:22.663 23:28:14 -- common/autotest_common.sh@1359 -- # local bs 00:20:22.663 23:28:14 -- common/autotest_common.sh@1360 -- # local nb 00:20:22.663 23:28:14 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b afff3c9a-987c-4b34-9570-8600d7a3886a 00:20:22.663 23:28:14 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:20:22.663 { 00:20:22.663 "name": "afff3c9a-987c-4b34-9570-8600d7a3886a", 00:20:22.663 "aliases": [ 00:20:22.663 "lvs/nvme0n1p0" 00:20:22.663 ], 00:20:22.663 "product_name": "Logical Volume", 00:20:22.663 "block_size": 4096, 00:20:22.663 "num_blocks": 26476544, 00:20:22.663 "uuid": "afff3c9a-987c-4b34-9570-8600d7a3886a", 00:20:22.663 "assigned_rate_limits": { 00:20:22.663 "rw_ios_per_sec": 0, 00:20:22.663 "rw_mbytes_per_sec": 0, 00:20:22.663 "r_mbytes_per_sec": 0, 00:20:22.663 "w_mbytes_per_sec": 0 00:20:22.663 }, 00:20:22.663 "claimed": false, 00:20:22.663 "zoned": false, 00:20:22.663 "supported_io_types": { 00:20:22.663 "read": true, 00:20:22.663 "write": true, 00:20:22.663 "unmap": true, 00:20:22.663 "write_zeroes": true, 00:20:22.663 "flush": false, 00:20:22.663 "reset": true, 00:20:22.663 "compare": false, 00:20:22.663 "compare_and_write": false, 00:20:22.663 "abort": false, 00:20:22.663 "nvme_admin": false, 00:20:22.663 "nvme_io": false 00:20:22.663 }, 00:20:22.663 "driver_specific": { 00:20:22.663 "lvol": { 00:20:22.663 "lvol_store_uuid": "163a7699-2707-42b2-bc57-107779331483", 00:20:22.663 "base_bdev": "nvme0n1", 00:20:22.663 "thin_provision": true, 00:20:22.663 "snapshot": false, 00:20:22.663 "clone": false, 00:20:22.663 "esnap_clone": false 00:20:22.663 } 00:20:22.663 } 00:20:22.663 } 00:20:22.663 ]' 00:20:22.663 23:28:14 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:20:22.923 23:28:14 -- common/autotest_common.sh@1362 -- # bs=4096 00:20:22.923 23:28:14 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:20:22.923 23:28:14 -- common/autotest_common.sh@1363 -- # nb=26476544 00:20:22.923 23:28:14 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:20:22.923 23:28:14 -- common/autotest_common.sh@1367 -- # echo 103424 00:20:22.923 23:28:14 -- ftl/common.sh@41 -- # local base_size=5171 00:20:22.923 23:28:14 -- ftl/common.sh@44 -- # local nvc_bdev 00:20:22.923 23:28:14 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:20:23.182 23:28:14 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:23.182 23:28:14 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:23.182 23:28:14 -- ftl/common.sh@48 -- # get_bdev_size afff3c9a-987c-4b34-9570-8600d7a3886a 00:20:23.182 23:28:14 -- common/autotest_common.sh@1357 -- # local bdev_name=afff3c9a-987c-4b34-9570-8600d7a3886a 00:20:23.182 23:28:14 -- common/autotest_common.sh@1358 -- # local bdev_info 00:20:23.182 23:28:14 -- common/autotest_common.sh@1359 -- # local bs 00:20:23.182 23:28:14 -- common/autotest_common.sh@1360 -- # local nb 00:20:23.182 23:28:14 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b afff3c9a-987c-4b34-9570-8600d7a3886a 00:20:23.182 23:28:14 -- common/autotest_common.sh@1361 -- # 
bdev_info='[ 00:20:23.182 { 00:20:23.182 "name": "afff3c9a-987c-4b34-9570-8600d7a3886a", 00:20:23.182 "aliases": [ 00:20:23.182 "lvs/nvme0n1p0" 00:20:23.182 ], 00:20:23.182 "product_name": "Logical Volume", 00:20:23.182 "block_size": 4096, 00:20:23.182 "num_blocks": 26476544, 00:20:23.182 "uuid": "afff3c9a-987c-4b34-9570-8600d7a3886a", 00:20:23.182 "assigned_rate_limits": { 00:20:23.182 "rw_ios_per_sec": 0, 00:20:23.182 "rw_mbytes_per_sec": 0, 00:20:23.182 "r_mbytes_per_sec": 0, 00:20:23.182 "w_mbytes_per_sec": 0 00:20:23.182 }, 00:20:23.182 "claimed": false, 00:20:23.182 "zoned": false, 00:20:23.182 "supported_io_types": { 00:20:23.182 "read": true, 00:20:23.182 "write": true, 00:20:23.182 "unmap": true, 00:20:23.182 "write_zeroes": true, 00:20:23.182 "flush": false, 00:20:23.182 "reset": true, 00:20:23.182 "compare": false, 00:20:23.182 "compare_and_write": false, 00:20:23.182 "abort": false, 00:20:23.182 "nvme_admin": false, 00:20:23.182 "nvme_io": false 00:20:23.182 }, 00:20:23.182 "driver_specific": { 00:20:23.182 "lvol": { 00:20:23.182 "lvol_store_uuid": "163a7699-2707-42b2-bc57-107779331483", 00:20:23.182 "base_bdev": "nvme0n1", 00:20:23.182 "thin_provision": true, 00:20:23.182 "snapshot": false, 00:20:23.182 "clone": false, 00:20:23.182 "esnap_clone": false 00:20:23.182 } 00:20:23.182 } 00:20:23.182 } 00:20:23.182 ]' 00:20:23.182 23:28:14 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:20:23.441 23:28:14 -- common/autotest_common.sh@1362 -- # bs=4096 00:20:23.441 23:28:14 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:20:23.441 23:28:15 -- common/autotest_common.sh@1363 -- # nb=26476544 00:20:23.441 23:28:15 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:20:23.441 23:28:15 -- common/autotest_common.sh@1367 -- # echo 103424 00:20:23.441 23:28:15 -- ftl/common.sh@48 -- # cache_size=5171 00:20:23.441 23:28:15 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:23.701 23:28:15 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:23.701 23:28:15 -- ftl/restore.sh@48 -- # get_bdev_size afff3c9a-987c-4b34-9570-8600d7a3886a 00:20:23.701 23:28:15 -- common/autotest_common.sh@1357 -- # local bdev_name=afff3c9a-987c-4b34-9570-8600d7a3886a 00:20:23.701 23:28:15 -- common/autotest_common.sh@1358 -- # local bdev_info 00:20:23.701 23:28:15 -- common/autotest_common.sh@1359 -- # local bs 00:20:23.701 23:28:15 -- common/autotest_common.sh@1360 -- # local nb 00:20:23.701 23:28:15 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b afff3c9a-987c-4b34-9570-8600d7a3886a 00:20:23.701 23:28:15 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:20:23.701 { 00:20:23.701 "name": "afff3c9a-987c-4b34-9570-8600d7a3886a", 00:20:23.701 "aliases": [ 00:20:23.701 "lvs/nvme0n1p0" 00:20:23.701 ], 00:20:23.701 "product_name": "Logical Volume", 00:20:23.701 "block_size": 4096, 00:20:23.701 "num_blocks": 26476544, 00:20:23.701 "uuid": "afff3c9a-987c-4b34-9570-8600d7a3886a", 00:20:23.701 "assigned_rate_limits": { 00:20:23.701 "rw_ios_per_sec": 0, 00:20:23.701 "rw_mbytes_per_sec": 0, 00:20:23.701 "r_mbytes_per_sec": 0, 00:20:23.701 "w_mbytes_per_sec": 0 00:20:23.701 }, 00:20:23.701 "claimed": false, 00:20:23.701 "zoned": false, 00:20:23.701 "supported_io_types": { 00:20:23.701 "read": true, 00:20:23.701 "write": true, 00:20:23.701 "unmap": true, 00:20:23.701 "write_zeroes": true, 00:20:23.701 "flush": false, 00:20:23.701 "reset": true, 00:20:23.701 "compare": false, 
00:20:23.701 "compare_and_write": false, 00:20:23.701 "abort": false, 00:20:23.701 "nvme_admin": false, 00:20:23.701 "nvme_io": false 00:20:23.701 }, 00:20:23.701 "driver_specific": { 00:20:23.701 "lvol": { 00:20:23.701 "lvol_store_uuid": "163a7699-2707-42b2-bc57-107779331483", 00:20:23.701 "base_bdev": "nvme0n1", 00:20:23.701 "thin_provision": true, 00:20:23.701 "snapshot": false, 00:20:23.701 "clone": false, 00:20:23.701 "esnap_clone": false 00:20:23.701 } 00:20:23.701 } 00:20:23.701 } 00:20:23.701 ]' 00:20:23.702 23:28:15 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:20:23.702 23:28:15 -- common/autotest_common.sh@1362 -- # bs=4096 00:20:23.702 23:28:15 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:20:23.962 23:28:15 -- common/autotest_common.sh@1363 -- # nb=26476544 00:20:23.962 23:28:15 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:20:23.962 23:28:15 -- common/autotest_common.sh@1367 -- # echo 103424 00:20:23.962 23:28:15 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:23.963 23:28:15 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d afff3c9a-987c-4b34-9570-8600d7a3886a --l2p_dram_limit 10' 00:20:23.963 23:28:15 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:23.963 23:28:15 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:23.963 23:28:15 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:23.963 23:28:15 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:23.963 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:23.963 23:28:15 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d afff3c9a-987c-4b34-9570-8600d7a3886a --l2p_dram_limit 10 -c nvc0n1p0 00:20:23.963 [2024-07-26 23:28:15.626884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.963 [2024-07-26 23:28:15.626930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:23.963 [2024-07-26 23:28:15.626947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:23.963 [2024-07-26 23:28:15.626957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.963 [2024-07-26 23:28:15.627024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.963 [2024-07-26 23:28:15.627036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:23.963 [2024-07-26 23:28:15.627049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:23.963 [2024-07-26 23:28:15.627058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.963 [2024-07-26 23:28:15.627080] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:23.963 [2024-07-26 23:28:15.628097] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:23.963 [2024-07-26 23:28:15.628132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.963 [2024-07-26 23:28:15.628142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:23.963 [2024-07-26 23:28:15.628155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:20:23.963 [2024-07-26 23:28:15.628165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.963 [2024-07-26 23:28:15.628233] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 
ed2b9793-1462-4ab0-96b4-6bb7a0394f8d 00:20:23.963 [2024-07-26 23:28:15.629603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.963 [2024-07-26 23:28:15.629628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:23.963 [2024-07-26 23:28:15.629639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:23.963 [2024-07-26 23:28:15.629652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.963 [2024-07-26 23:28:15.637098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.963 [2024-07-26 23:28:15.637133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:23.963 [2024-07-26 23:28:15.637145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.420 ms 00:20:23.963 [2024-07-26 23:28:15.637157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.963 [2024-07-26 23:28:15.637244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.963 [2024-07-26 23:28:15.637260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:23.963 [2024-07-26 23:28:15.637271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:23.963 [2024-07-26 23:28:15.637288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.963 [2024-07-26 23:28:15.637337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.963 [2024-07-26 23:28:15.637351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:23.963 [2024-07-26 23:28:15.637364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:23.963 [2024-07-26 23:28:15.637376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.963 [2024-07-26 23:28:15.637401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:23.963 [2024-07-26 23:28:15.642557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.963 [2024-07-26 23:28:15.642589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:23.963 [2024-07-26 23:28:15.642603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.169 ms 00:20:23.963 [2024-07-26 23:28:15.642614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.963 [2024-07-26 23:28:15.642651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.963 [2024-07-26 23:28:15.642662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:23.963 [2024-07-26 23:28:15.642674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:23.963 [2024-07-26 23:28:15.642684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.963 [2024-07-26 23:28:15.642722] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:23.963 [2024-07-26 23:28:15.642824] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:23.963 [2024-07-26 23:28:15.642843] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:23.963 [2024-07-26 23:28:15.642856] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:23.963 [2024-07-26 23:28:15.642871] ftl_layout.c: 
676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:23.963 [2024-07-26 23:28:15.642882] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:23.963 [2024-07-26 23:28:15.642896] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:23.963 [2024-07-26 23:28:15.642906] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:23.963 [2024-07-26 23:28:15.642921] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:23.963 [2024-07-26 23:28:15.642931] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:23.963 [2024-07-26 23:28:15.642942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.963 [2024-07-26 23:28:15.642952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:23.963 [2024-07-26 23:28:15.642986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:20:23.963 [2024-07-26 23:28:15.642996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.963 [2024-07-26 23:28:15.643052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.963 [2024-07-26 23:28:15.643064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:23.963 [2024-07-26 23:28:15.643077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:23.963 [2024-07-26 23:28:15.643087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.963 [2024-07-26 23:28:15.643154] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:23.963 [2024-07-26 23:28:15.643165] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:23.963 [2024-07-26 23:28:15.643178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.963 [2024-07-26 23:28:15.643188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.963 [2024-07-26 23:28:15.643201] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:23.963 [2024-07-26 23:28:15.643210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:23.963 [2024-07-26 23:28:15.643221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:23.963 [2024-07-26 23:28:15.643230] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:23.963 [2024-07-26 23:28:15.643242] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:23.963 [2024-07-26 23:28:15.643251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.963 [2024-07-26 23:28:15.643262] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:23.963 [2024-07-26 23:28:15.643273] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:23.963 [2024-07-26 23:28:15.643286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.963 [2024-07-26 23:28:15.643295] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:23.963 [2024-07-26 23:28:15.643305] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:23.963 [2024-07-26 23:28:15.643314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.963 [2024-07-26 23:28:15.643327] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:23.963 [2024-07-26 23:28:15.643336] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:23.963 [2024-07-26 23:28:15.643347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.963 [2024-07-26 23:28:15.643356] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:23.963 [2024-07-26 23:28:15.643367] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:23.963 [2024-07-26 23:28:15.643376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:23.963 [2024-07-26 23:28:15.643387] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:23.963 [2024-07-26 23:28:15.643396] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:23.963 [2024-07-26 23:28:15.643406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:23.963 [2024-07-26 23:28:15.643414] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:23.963 [2024-07-26 23:28:15.643425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:23.963 [2024-07-26 23:28:15.643433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:23.963 [2024-07-26 23:28:15.643444] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:23.963 [2024-07-26 23:28:15.643452] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:23.963 [2024-07-26 23:28:15.643462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:23.963 [2024-07-26 23:28:15.643471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:23.963 [2024-07-26 23:28:15.643485] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:23.963 [2024-07-26 23:28:15.643494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:23.963 [2024-07-26 23:28:15.643504] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:23.963 [2024-07-26 23:28:15.643512] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:23.963 [2024-07-26 23:28:15.643522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.963 [2024-07-26 23:28:15.643531] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:23.963 [2024-07-26 23:28:15.643542] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:23.963 [2024-07-26 23:28:15.643551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.963 [2024-07-26 23:28:15.643561] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:23.963 [2024-07-26 23:28:15.643570] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:23.964 [2024-07-26 23:28:15.643581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.964 [2024-07-26 23:28:15.643591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.964 [2024-07-26 23:28:15.643603] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:23.964 [2024-07-26 23:28:15.643611] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:23.964 [2024-07-26 23:28:15.643622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:23.964 [2024-07-26 23:28:15.643632] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:23.964 [2024-07-26 23:28:15.643645] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:23.964 
[2024-07-26 23:28:15.643653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:23.964 [2024-07-26 23:28:15.643665] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:23.964 [2024-07-26 23:28:15.643680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.964 [2024-07-26 23:28:15.643693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:23.964 [2024-07-26 23:28:15.643703] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:23.964 [2024-07-26 23:28:15.643715] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:23.964 [2024-07-26 23:28:15.643725] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:23.964 [2024-07-26 23:28:15.643737] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:23.964 [2024-07-26 23:28:15.643746] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:23.964 [2024-07-26 23:28:15.643758] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:23.964 [2024-07-26 23:28:15.643767] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:23.964 [2024-07-26 23:28:15.643779] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:23.964 [2024-07-26 23:28:15.643790] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:23.964 [2024-07-26 23:28:15.643801] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:23.964 [2024-07-26 23:28:15.643810] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:23.964 [2024-07-26 23:28:15.643826] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:23.964 [2024-07-26 23:28:15.643836] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:23.964 [2024-07-26 23:28:15.643849] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.964 [2024-07-26 23:28:15.643859] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:23.964 [2024-07-26 23:28:15.643870] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:23.964 [2024-07-26 23:28:15.643880] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 
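
The superblock dump above lists each metadata region as a hex block offset (blk_offs) and block count (blk_sz); its final base-device region continues just below. As a quick cross-check against the MiB figures in the layout dump, the sketch that follows converts those hex block counts to MiB, assuming the 4096-byte FTL block size this run reports. blocks_to_mib is a hypothetical helper for illustration, not part of the test suite:

    # Hypothetical helper: hex FTL block count -> MiB, assuming 4096-byte blocks.
    blocks_to_mib() {
      local blocks=$((16#${1#0x}))             # strip "0x", parse as hex
      echo "scale=2; $blocks * 4096 / 1048576" | bc
    }
    blocks_to_mib 0x100000    # 4096.00, matching the 4096.00 MiB data_nvc region
    blocks_to_mib 0x1900000   # 102400.00, matching the 102400.00 MiB data_btm region

The same arithmetic accounts for the l2p region above: 20971520 L2P entries at an address size of 4 bytes is exactly 80.00 MiB.
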
00:20:23.964 [2024-07-26 23:28:15.643892] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:23.964 [2024-07-26 23:28:15.643902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.964 [2024-07-26 23:28:15.643914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:23.964 [2024-07-26 23:28:15.643923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:20:23.964 [2024-07-26 23:28:15.643935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.964 [2024-07-26 23:28:15.668569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.964 [2024-07-26 23:28:15.668733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:23.964 [2024-07-26 23:28:15.668840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.363 ms 00:20:23.964 [2024-07-26 23:28:15.668880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.964 [2024-07-26 23:28:15.668989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.964 [2024-07-26 23:28:15.669030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:23.964 [2024-07-26 23:28:15.669061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:23.964 [2024-07-26 23:28:15.669096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.224 [2024-07-26 23:28:15.721072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.224 [2024-07-26 23:28:15.721215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:24.224 [2024-07-26 23:28:15.721287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.932 ms 00:20:24.224 [2024-07-26 23:28:15.721323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.224 [2024-07-26 23:28:15.721374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.224 [2024-07-26 23:28:15.721406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:24.224 [2024-07-26 23:28:15.721436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:24.224 [2024-07-26 23:28:15.721466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.224 [2024-07-26 23:28:15.721944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.224 [2024-07-26 23:28:15.722085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:24.224 [2024-07-26 23:28:15.722159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:20:24.224 [2024-07-26 23:28:15.722198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.224 [2024-07-26 23:28:15.722393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.224 [2024-07-26 23:28:15.722490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:24.224 [2024-07-26 23:28:15.722528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:24.224 [2024-07-26 23:28:15.722561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.224 [2024-07-26 23:28:15.744575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.224 [2024-07-26 23:28:15.744728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize reloc 00:20:24.224 [2024-07-26 23:28:15.744840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.006 ms 00:20:24.224 [2024-07-26 23:28:15.744880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.224 [2024-07-26 23:28:15.758257] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:24.224 [2024-07-26 23:28:15.761391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.224 [2024-07-26 23:28:15.761421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:24.224 [2024-07-26 23:28:15.761436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.421 ms 00:20:24.224 [2024-07-26 23:28:15.761446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.224 [2024-07-26 23:28:15.858648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.224 [2024-07-26 23:28:15.858685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:24.224 [2024-07-26 23:28:15.858701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.331 ms 00:20:24.224 [2024-07-26 23:28:15.858711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.224 [2024-07-26 23:28:15.858755] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:20:24.224 [2024-07-26 23:28:15.858769] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:20:28.420 [2024-07-26 23:28:19.449393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-26 23:28:19.449456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:28.420 [2024-07-26 23:28:19.449476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3596.462 ms 00:20:28.420 [2024-07-26 23:28:19.449487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-26 23:28:19.449674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-26 23:28:19.449688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:28.420 [2024-07-26 23:28:19.449702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:20:28.420 [2024-07-26 23:28:19.449715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-26 23:28:19.486523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-26 23:28:19.486560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:28.420 [2024-07-26 23:28:19.486576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.816 ms 00:20:28.420 [2024-07-26 23:28:19.486587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-26 23:28:19.522067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-26 23:28:19.522101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:28.420 [2024-07-26 23:28:19.522120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.494 ms 00:20:28.420 [2024-07-26 23:28:19.522130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-26 23:28:19.522511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-26 
23:28:19.522526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:28.420 [2024-07-26 23:28:19.522539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:20:28.420 [2024-07-26 23:28:19.522548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-26 23:28:19.616758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-26 23:28:19.616794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:28.420 [2024-07-26 23:28:19.616810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.312 ms 00:20:28.420 [2024-07-26 23:28:19.616821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-26 23:28:19.654177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-26 23:28:19.654221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:28.420 [2024-07-26 23:28:19.654241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.373 ms 00:20:28.420 [2024-07-26 23:28:19.654251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-26 23:28:19.656231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-26 23:28:19.656263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:28.420 [2024-07-26 23:28:19.656279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.943 ms 00:20:28.420 [2024-07-26 23:28:19.656289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-26 23:28:19.692376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-26 23:28:19.692411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:28.420 [2024-07-26 23:28:19.692426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.093 ms 00:20:28.420 [2024-07-26 23:28:19.692436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-26 23:28:19.692487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-26 23:28:19.692499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:28.420 [2024-07-26 23:28:19.692512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:28.420 [2024-07-26 23:28:19.692521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-26 23:28:19.692616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.420 [2024-07-26 23:28:19.692631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:28.420 [2024-07-26 23:28:19.692644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:28.420 [2024-07-26 23:28:19.692653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.420 [2024-07-26 23:28:19.693590] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4072.906 ms, result 0 00:20:28.420 { 00:20:28.420 "name": "ftl0", 00:20:28.420 "uuid": "ed2b9793-1462-4ab0-96b4-6bb7a0394f8d" 00:20:28.420 } 00:20:28.420 23:28:19 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:28.420 23:28:19 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:28.420 23:28:19 -- 
ftl/restore.sh@63 -- # echo ']}' 00:20:28.420 23:28:19 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:28.420 [2024-07-26 23:28:20.092288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-26 23:28:20.092335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:28.421 [2024-07-26 23:28:20.092348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:28.421 [2024-07-26 23:28:20.092360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-26 23:28:20.092383] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:28.421 [2024-07-26 23:28:20.095880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-26 23:28:20.095909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:28.421 [2024-07-26 23:28:20.095923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.484 ms 00:20:28.421 [2024-07-26 23:28:20.095933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-26 23:28:20.096168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-26 23:28:20.096186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:28.421 [2024-07-26 23:28:20.096198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:20:28.421 [2024-07-26 23:28:20.096209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-26 23:28:20.098534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-26 23:28:20.098558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:28.421 [2024-07-26 23:28:20.098571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.300 ms 00:20:28.421 [2024-07-26 23:28:20.098580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-26 23:28:20.103484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-26 23:28:20.103606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:28.421 [2024-07-26 23:28:20.103759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.883 ms 00:20:28.421 [2024-07-26 23:28:20.103796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-26 23:28:20.139279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-26 23:28:20.139424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:28.421 [2024-07-26 23:28:20.139448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.442 ms 00:20:28.421 [2024-07-26 23:28:20.139458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-26 23:28:20.161518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.421 [2024-07-26 23:28:20.161553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:28.421 [2024-07-26 23:28:20.161570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.968 ms 00:20:28.421 [2024-07-26 23:28:20.161580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.421 [2024-07-26 23:28:20.161713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:20:28.421 [2024-07-26 23:28:20.161726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:28.421 [2024-07-26 23:28:20.161740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:28.421 [2024-07-26 23:28:20.161749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.681 [2024-07-26 23:28:20.198215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.681 [2024-07-26 23:28:20.198248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:28.681 [2024-07-26 23:28:20.198263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.497 ms 00:20:28.681 [2024-07-26 23:28:20.198272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.681 [2024-07-26 23:28:20.233506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.681 [2024-07-26 23:28:20.233540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:28.681 [2024-07-26 23:28:20.233554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.250 ms 00:20:28.681 [2024-07-26 23:28:20.233563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.681 [2024-07-26 23:28:20.268633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.681 [2024-07-26 23:28:20.268666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:28.681 [2024-07-26 23:28:20.268681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.085 ms 00:20:28.681 [2024-07-26 23:28:20.268690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.681 [2024-07-26 23:28:20.303858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.681 [2024-07-26 23:28:20.303891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:28.681 [2024-07-26 23:28:20.303905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.136 ms 00:20:28.681 [2024-07-26 23:28:20.303914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.681 [2024-07-26 23:28:20.303971] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:28.681 [2024-07-26 23:28:20.303988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 
state: free 00:20:28.681 [2024-07-26 23:28:20.304102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:28.681 [2024-07-26 23:28:20.304367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 
0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304960] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.304992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:28.682 [2024-07-26 23:28:20.305167] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:28.682 [2024-07-26 23:28:20.305179] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed2b9793-1462-4ab0-96b4-6bb7a0394f8d 00:20:28.682 [2024-07-26 23:28:20.305189] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:28.682 [2024-07-26 23:28:20.305200] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:28.682 [2024-07-26 23:28:20.305209] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:28.682 [2024-07-26 23:28:20.305220] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:28.682 [2024-07-26 23:28:20.305229] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:28.682 [2024-07-26 23:28:20.305242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:28.682 [2024-07-26 23:28:20.305251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:28.682 [2024-07-26 23:28:20.305262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:28.682 [2024-07-26 
23:28:20.305271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:28.682 [2024-07-26 23:28:20.305284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.682 [2024-07-26 23:28:20.305293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:28.682 [2024-07-26 23:28:20.305306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.325 ms 00:20:28.682 [2024-07-26 23:28:20.305318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.682 [2024-07-26 23:28:20.323911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.682 [2024-07-26 23:28:20.323949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:28.682 [2024-07-26 23:28:20.323973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.575 ms 00:20:28.682 [2024-07-26 23:28:20.323983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.682 [2024-07-26 23:28:20.324174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.682 [2024-07-26 23:28:20.324187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:28.682 [2024-07-26 23:28:20.324204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:20:28.682 [2024-07-26 23:28:20.324213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.682 [2024-07-26 23:28:20.389591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.683 [2024-07-26 23:28:20.389623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.683 [2024-07-26 23:28:20.389637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.683 [2024-07-26 23:28:20.389648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.683 [2024-07-26 23:28:20.389701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.683 [2024-07-26 23:28:20.389712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.683 [2024-07-26 23:28:20.389728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.683 [2024-07-26 23:28:20.389738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.683 [2024-07-26 23:28:20.389811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.683 [2024-07-26 23:28:20.389824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.683 [2024-07-26 23:28:20.389836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.683 [2024-07-26 23:28:20.389846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.683 [2024-07-26 23:28:20.389866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.683 [2024-07-26 23:28:20.389877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.683 [2024-07-26 23:28:20.389888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.683 [2024-07-26 23:28:20.389900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.942 [2024-07-26 23:28:20.503791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.942 [2024-07-26 23:28:20.503834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:28.942 [2024-07-26 23:28:20.503849] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.942 [2024-07-26 23:28:20.503860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.942 [2024-07-26 23:28:20.546802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.942 [2024-07-26 23:28:20.546837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:28.942 [2024-07-26 23:28:20.546851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.942 [2024-07-26 23:28:20.546863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.942 [2024-07-26 23:28:20.546932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.942 [2024-07-26 23:28:20.546944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.942 [2024-07-26 23:28:20.546956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.942 [2024-07-26 23:28:20.546978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.942 [2024-07-26 23:28:20.547027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.942 [2024-07-26 23:28:20.547040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.942 [2024-07-26 23:28:20.547053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.942 [2024-07-26 23:28:20.547062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.942 [2024-07-26 23:28:20.547168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.942 [2024-07-26 23:28:20.547181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.942 [2024-07-26 23:28:20.547194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.943 [2024-07-26 23:28:20.547204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.943 [2024-07-26 23:28:20.547244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.943 [2024-07-26 23:28:20.547256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:28.943 [2024-07-26 23:28:20.547268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.943 [2024-07-26 23:28:20.547278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.943 [2024-07-26 23:28:20.547320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.943 [2024-07-26 23:28:20.547332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.943 [2024-07-26 23:28:20.547343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.943 [2024-07-26 23:28:20.547352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.943 [2024-07-26 23:28:20.547399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.943 [2024-07-26 23:28:20.547411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.943 [2024-07-26 23:28:20.547423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.943 [2024-07-26 23:28:20.547433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.943 [2024-07-26 23:28:20.547560] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 455.969 ms, result 0 00:20:28.943 true 00:20:28.943 23:28:20 -- 
ftl/restore.sh@66 -- # killprocess 74345
00:20:28.943 23:28:20 -- common/autotest_common.sh@926 -- # '[' -z 74345 ']'
00:20:28.943 23:28:20 -- common/autotest_common.sh@930 -- # kill -0 74345
00:20:28.943 23:28:20 -- common/autotest_common.sh@931 -- # uname
00:20:28.943 23:28:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:20:28.943 23:28:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74345
00:20:28.943 23:28:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:20:28.943 23:28:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:20:28.943 killing process with pid 74345
00:20:28.943 23:28:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74345'
00:20:28.943 23:28:20 -- common/autotest_common.sh@945 -- # kill 74345
00:20:28.943 23:28:20 -- common/autotest_common.sh@950 -- # wait 74345
00:20:34.262 23:28:25 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:20:38.450 262144+0 records in
00:20:38.450 262144+0 records out
00:20:38.450 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.86217 s, 278 MB/s
00:20:38.450 23:28:29 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:20:39.828 23:28:31 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:39.828 [2024-07-26 23:28:31.226940] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:20:39.828 [2024-07-26 23:28:31.227082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74606 ]
00:20:40.088 [2024-07-26 23:28:31.398906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:40.347 [2024-07-26 23:28:31.612252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:40.347 [2024-07-26 23:28:31.974179] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:40.347 [2024-07-26 23:28:31.974241] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:40.607 [2024-07-26 23:28:32.127026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.607 [2024-07-26 23:28:32.127068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:20:40.607 [2024-07-26 23:28:32.127082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:20:40.607 [2024-07-26 23:28:32.127092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.607 [2024-07-26 23:28:32.127137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:40.607 [2024-07-26 23:28:32.127149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:40.607 [2024-07-26 23:28:32.127160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:20:40.607 [2024-07-26 23:28:32.127169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:40.607 [2024-07-26 23:28:32.127185] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:40.607 [2024-07-26 23:28:32.128257] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
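
The trace above captures the restore round-trip: the first FTL process (pid 74345) is killed, a 1 GiB random test file is generated and checksummed, and spdk_dd then replays that file into ftl0 while a fresh FTL instance starts up. A minimal sketch of the generate-and-replay step, using only commands that appear in the trace; SPDK_DIR is an added shorthand for /home/vagrant/spdk_repo/spdk, not something the test defines:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk     # shorthand for this run's repo path
    # 262144 blocks * 4096 bytes = 1073741824 bytes; over the 3.86217 s
    # reported above that is ~278 MB/s, matching dd's own summary line.
    dd if=/dev/urandom of="$SPDK_DIR/test/ftl/testfile" bs=4K count=256K
    md5sum "$SPDK_DIR/test/ftl/testfile"      # baseline digest for the restore check
    "$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/testfile" \
        --ob=ftl0 --json="$SPDK_DIR/test/ftl/config/ftl.json"
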
00:20:40.607 [2024-07-26 23:28:32.128287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.607 [2024-07-26 23:28:32.128298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.607 [2024-07-26 23:28:32.128309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:20:40.607 [2024-07-26 23:28:32.128319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.607 [2024-07-26 23:28:32.129729] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:40.607 [2024-07-26 23:28:32.148556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.607 [2024-07-26 23:28:32.148593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:40.607 [2024-07-26 23:28:32.148611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.859 ms 00:20:40.607 [2024-07-26 23:28:32.148620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.607 [2024-07-26 23:28:32.148676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.607 [2024-07-26 23:28:32.148687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:40.607 [2024-07-26 23:28:32.148697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:40.607 [2024-07-26 23:28:32.148706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.607 [2024-07-26 23:28:32.155548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.607 [2024-07-26 23:28:32.155577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.607 [2024-07-26 23:28:32.155588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.789 ms 00:20:40.607 [2024-07-26 23:28:32.155597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.607 [2024-07-26 23:28:32.155675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.607 [2024-07-26 23:28:32.155688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.607 [2024-07-26 23:28:32.155699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:40.607 [2024-07-26 23:28:32.155708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.608 [2024-07-26 23:28:32.155744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.608 [2024-07-26 23:28:32.155759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:40.608 [2024-07-26 23:28:32.155769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:40.608 [2024-07-26 23:28:32.155778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.608 [2024-07-26 23:28:32.155802] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:40.608 [2024-07-26 23:28:32.161351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.608 [2024-07-26 23:28:32.161381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.608 [2024-07-26 23:28:32.161392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.564 ms 00:20:40.608 [2024-07-26 23:28:32.161402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.608 [2024-07-26 23:28:32.161432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.608 
[2024-07-26 23:28:32.161443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:40.608 [2024-07-26 23:28:32.161453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:40.608 [2024-07-26 23:28:32.161462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.608 [2024-07-26 23:28:32.161508] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:40.608 [2024-07-26 23:28:32.161535] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:40.608 [2024-07-26 23:28:32.161567] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:40.608 [2024-07-26 23:28:32.161583] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:40.608 [2024-07-26 23:28:32.161644] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:40.608 [2024-07-26 23:28:32.161656] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:40.608 [2024-07-26 23:28:32.161668] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:40.608 [2024-07-26 23:28:32.161680] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:40.608 [2024-07-26 23:28:32.161690] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:40.608 [2024-07-26 23:28:32.161703] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:40.608 [2024-07-26 23:28:32.161713] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:40.608 [2024-07-26 23:28:32.161722] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:40.608 [2024-07-26 23:28:32.161732] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:40.608 [2024-07-26 23:28:32.161742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.608 [2024-07-26 23:28:32.161751] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:40.608 [2024-07-26 23:28:32.161761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:20:40.608 [2024-07-26 23:28:32.161770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.608 [2024-07-26 23:28:32.161820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.608 [2024-07-26 23:28:32.161831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:40.608 [2024-07-26 23:28:32.161843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:40.608 [2024-07-26 23:28:32.161852] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.608 [2024-07-26 23:28:32.161908] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:40.608 [2024-07-26 23:28:32.161919] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:40.608 [2024-07-26 23:28:32.161929] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.608 [2024-07-26 23:28:32.161938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.608 [2024-07-26 23:28:32.161948] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:40.608 [2024-07-26 23:28:32.161957] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:40.608 [2024-07-26 23:28:32.161984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:40.608 [2024-07-26 23:28:32.161994] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:40.608 [2024-07-26 23:28:32.162020] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:40.608 [2024-07-26 23:28:32.162030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.608 [2024-07-26 23:28:32.162040] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:40.608 [2024-07-26 23:28:32.162049] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:40.608 [2024-07-26 23:28:32.162058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.608 [2024-07-26 23:28:32.162068] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:40.608 [2024-07-26 23:28:32.162078] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:40.608 [2024-07-26 23:28:32.162086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.608 [2024-07-26 23:28:32.162095] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:40.608 [2024-07-26 23:28:32.162104] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:40.608 [2024-07-26 23:28:32.162113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.608 [2024-07-26 23:28:32.162122] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:40.608 [2024-07-26 23:28:32.162131] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:40.608 [2024-07-26 23:28:32.162150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:40.608 [2024-07-26 23:28:32.162160] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:40.608 [2024-07-26 23:28:32.162169] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:40.608 [2024-07-26 23:28:32.162178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:40.608 [2024-07-26 23:28:32.162187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:40.608 [2024-07-26 23:28:32.162196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:40.608 [2024-07-26 23:28:32.162204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:40.608 [2024-07-26 23:28:32.162212] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:40.608 [2024-07-26 23:28:32.162221] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:40.608 [2024-07-26 23:28:32.162230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:40.608 [2024-07-26 23:28:32.162238] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:40.608 [2024-07-26 23:28:32.162247] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:40.608 [2024-07-26 23:28:32.162255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:40.608 [2024-07-26 23:28:32.162264] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:40.608 [2024-07-26 23:28:32.162273] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:40.608 [2024-07-26 
23:28:32.162283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.608 [2024-07-26 23:28:32.162291] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:40.608 [2024-07-26 23:28:32.162300] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:40.608 [2024-07-26 23:28:32.162309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.608 [2024-07-26 23:28:32.162319] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:40.608 [2024-07-26 23:28:32.162329] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:40.608 [2024-07-26 23:28:32.162338] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.608 [2024-07-26 23:28:32.162350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.608 [2024-07-26 23:28:32.162360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:40.608 [2024-07-26 23:28:32.162369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:40.608 [2024-07-26 23:28:32.162379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:40.608 [2024-07-26 23:28:32.162388] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:40.608 [2024-07-26 23:28:32.162397] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:40.608 [2024-07-26 23:28:32.162406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:40.608 [2024-07-26 23:28:32.162416] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:40.608 [2024-07-26 23:28:32.162427] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.608 [2024-07-26 23:28:32.162439] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:40.608 [2024-07-26 23:28:32.162448] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:40.608 [2024-07-26 23:28:32.162458] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:40.608 [2024-07-26 23:28:32.162469] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:40.608 [2024-07-26 23:28:32.162480] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:40.608 [2024-07-26 23:28:32.162491] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:40.608 [2024-07-26 23:28:32.162500] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:40.608 [2024-07-26 23:28:32.162510] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:40.608 [2024-07-26 23:28:32.162521] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:40.608 [2024-07-26 23:28:32.162530] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:40.608 [2024-07-26 23:28:32.162540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:40.608 [2024-07-26 23:28:32.162549] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:40.609 [2024-07-26 23:28:32.162560] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:40.609 [2024-07-26 23:28:32.162569] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:40.609 [2024-07-26 23:28:32.162580] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.609 [2024-07-26 23:28:32.162590] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:40.609 [2024-07-26 23:28:32.162600] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:40.609 [2024-07-26 23:28:32.162610] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:40.609 [2024-07-26 23:28:32.162620] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:40.609 [2024-07-26 23:28:32.162629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.609 [2024-07-26 23:28:32.162639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:40.609 [2024-07-26 23:28:32.162649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:20:40.609 [2024-07-26 23:28:32.162660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.609 [2024-07-26 23:28:32.185827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.609 [2024-07-26 23:28:32.185859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.609 [2024-07-26 23:28:32.185872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.169 ms 00:20:40.609 [2024-07-26 23:28:32.185881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.609 [2024-07-26 23:28:32.185952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.609 [2024-07-26 23:28:32.185976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:40.609 [2024-07-26 23:28:32.185986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:40.609 [2024-07-26 23:28:32.185996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.609 [2024-07-26 23:28:32.258437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.609 [2024-07-26 23:28:32.258469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.609 [2024-07-26 23:28:32.258481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.512 ms 00:20:40.609 [2024-07-26 23:28:32.258494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.609 [2024-07-26 23:28:32.258525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:40.609 [2024-07-26 23:28:32.258535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.609 [2024-07-26 23:28:32.258545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:40.609 [2024-07-26 23:28:32.258554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.609 [2024-07-26 23:28:32.259035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.609 [2024-07-26 23:28:32.259056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.609 [2024-07-26 23:28:32.259067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:20:40.609 [2024-07-26 23:28:32.259077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.609 [2024-07-26 23:28:32.259182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.609 [2024-07-26 23:28:32.259195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.609 [2024-07-26 23:28:32.259205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:40.609 [2024-07-26 23:28:32.259214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.609 [2024-07-26 23:28:32.281421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.609 [2024-07-26 23:28:32.281454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.609 [2024-07-26 23:28:32.281466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.222 ms 00:20:40.609 [2024-07-26 23:28:32.281475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.609 [2024-07-26 23:28:32.299871] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:40.609 [2024-07-26 23:28:32.299909] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:40.609 [2024-07-26 23:28:32.299922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.609 [2024-07-26 23:28:32.299933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:40.609 [2024-07-26 23:28:32.299959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.385 ms 00:20:40.609 [2024-07-26 23:28:32.299976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.609 [2024-07-26 23:28:32.328258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.609 [2024-07-26 23:28:32.328300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:40.609 [2024-07-26 23:28:32.328316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.289 ms 00:20:40.609 [2024-07-26 23:28:32.328326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.609 [2024-07-26 23:28:32.346297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.609 [2024-07-26 23:28:32.346332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:40.609 [2024-07-26 23:28:32.346344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.958 ms 00:20:40.609 [2024-07-26 23:28:32.346353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.869 [2024-07-26 23:28:32.364307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.869 [2024-07-26 23:28:32.364340] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:40.869 [2024-07-26 23:28:32.364352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.949 ms 00:20:40.869 [2024-07-26 23:28:32.364362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.869 [2024-07-26 23:28:32.364801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.869 [2024-07-26 23:28:32.364827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:40.869 [2024-07-26 23:28:32.364839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:20:40.869 [2024-07-26 23:28:32.364849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.869 [2024-07-26 23:28:32.452721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.869 [2024-07-26 23:28:32.452764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:40.869 [2024-07-26 23:28:32.452777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.996 ms 00:20:40.869 [2024-07-26 23:28:32.452787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.869 [2024-07-26 23:28:32.464293] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:40.869 [2024-07-26 23:28:32.466545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.869 [2024-07-26 23:28:32.466573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:40.869 [2024-07-26 23:28:32.466585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.737 ms 00:20:40.869 [2024-07-26 23:28:32.466595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.869 [2024-07-26 23:28:32.466653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.869 [2024-07-26 23:28:32.466665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:40.869 [2024-07-26 23:28:32.466679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:40.869 [2024-07-26 23:28:32.466688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.869 [2024-07-26 23:28:32.466747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.869 [2024-07-26 23:28:32.466760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:40.869 [2024-07-26 23:28:32.466770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:40.869 [2024-07-26 23:28:32.466779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.869 [2024-07-26 23:28:32.468830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.869 [2024-07-26 23:28:32.468860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:40.869 [2024-07-26 23:28:32.468871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.038 ms 00:20:40.869 [2024-07-26 23:28:32.468885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.869 [2024-07-26 23:28:32.468918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.869 [2024-07-26 23:28:32.468929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:40.869 [2024-07-26 23:28:32.468939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:40.870 [2024-07-26 23:28:32.468949] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:40.870 [2024-07-26 23:28:32.469012] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:40.870 [2024-07-26 23:28:32.469024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.870 [2024-07-26 23:28:32.469034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:40.870 [2024-07-26 23:28:32.469044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:40.870 [2024-07-26 23:28:32.469053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.870 [2024-07-26 23:28:32.505187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.870 [2024-07-26 23:28:32.505232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:40.870 [2024-07-26 23:28:32.505245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.169 ms 00:20:40.870 [2024-07-26 23:28:32.505255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.870 [2024-07-26 23:28:32.505334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.870 [2024-07-26 23:28:32.505348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:40.870 [2024-07-26 23:28:32.505358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:40.870 [2024-07-26 23:28:32.505375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.870 [2024-07-26 23:28:32.506402] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 379.537 ms, result 0 00:21:23.890  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-26 23:29:15.629900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.890 [2024-07-26 23:29:15.629956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:23.890 [2024-07-26 23:29:15.629988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:23.890 [2024-07-26 23:29:15.629999]
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.890 [2024-07-26 23:29:15.630035] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:23.890 [2024-07-26 23:29:15.634168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.890 [2024-07-26 23:29:15.634203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:23.890 [2024-07-26 23:29:15.634232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.120 ms 00:21:23.890 [2024-07-26 23:29:15.634242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.890 [2024-07-26 23:29:15.636159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.890 [2024-07-26 23:29:15.636197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:23.890 [2024-07-26 23:29:15.636209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.888 ms 00:21:23.890 [2024-07-26 23:29:15.636220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.150 [2024-07-26 23:29:15.654041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.150 [2024-07-26 23:29:15.654079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:24.150 [2024-07-26 23:29:15.654092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.833 ms 00:21:24.150 [2024-07-26 23:29:15.654101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.150 [2024-07-26 23:29:15.658996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.150 [2024-07-26 23:29:15.659032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:24.150 [2024-07-26 23:29:15.659044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.855 ms 00:21:24.150 [2024-07-26 23:29:15.659053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.150 [2024-07-26 23:29:15.697856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.150 [2024-07-26 23:29:15.697895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:24.150 [2024-07-26 23:29:15.697908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.790 ms 00:21:24.150 [2024-07-26 23:29:15.697918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.150 [2024-07-26 23:29:15.719391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.150 [2024-07-26 23:29:15.719427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:24.150 [2024-07-26 23:29:15.719439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.441 ms 00:21:24.150 [2024-07-26 23:29:15.719449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.150 [2024-07-26 23:29:15.719604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.150 [2024-07-26 23:29:15.719617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:24.150 [2024-07-26 23:29:15.719635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:21:24.150 [2024-07-26 23:29:15.719644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.150 [2024-07-26 23:29:15.756034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.150 [2024-07-26 23:29:15.756068] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:24.150 [2024-07-26 23:29:15.756080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.434 ms 00:21:24.150 [2024-07-26 23:29:15.756090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.150 [2024-07-26 23:29:15.792207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.150 [2024-07-26 23:29:15.792241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:24.150 [2024-07-26 23:29:15.792269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.125 ms 00:21:24.150 [2024-07-26 23:29:15.792279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.150 [2024-07-26 23:29:15.827773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.150 [2024-07-26 23:29:15.827808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:24.150 [2024-07-26 23:29:15.827836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.518 ms 00:21:24.150 [2024-07-26 23:29:15.827845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.150 [2024-07-26 23:29:15.862725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.150 [2024-07-26 23:29:15.862759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:24.150 [2024-07-26 23:29:15.862771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.852 ms 00:21:24.150 [2024-07-26 23:29:15.862780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.150 [2024-07-26 23:29:15.862830] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:24.150 [2024-07-26 23:29:15.862847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.862859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.862871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.862882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.862892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.862903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.862913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.862923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.862933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.862944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.862953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.862972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.862983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.862993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.863003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.863013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.863024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.863034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.863044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.863054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.863064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.863074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.863084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.863093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:24.150 [2024-07-26 23:29:15.863103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863256] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 
23:29:15.863513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:21:24.151 [2024-07-26 23:29:15.863770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:24.151 [2024-07-26 23:29:15.863913] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:24.151 [2024-07-26 23:29:15.863922] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed2b9793-1462-4ab0-96b4-6bb7a0394f8d 00:21:24.151 [2024-07-26 23:29:15.863933] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:24.151 [2024-07-26 23:29:15.863949] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:24.151 [2024-07-26 23:29:15.863974] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:24.151 [2024-07-26 23:29:15.863984] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:24.151 [2024-07-26 23:29:15.863993] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:24.151 [2024-07-26 23:29:15.864003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:24.151 [2024-07-26 23:29:15.864013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:24.151 [2024-07-26 23:29:15.864022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:24.151 [2024-07-26 23:29:15.864031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:24.151 [2024-07-26 23:29:15.864040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.151 [2024-07-26 23:29:15.864050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:24.151 [2024-07-26 23:29:15.864061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:21:24.151 [2024-07-26 23:29:15.864070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:24.151 [2024-07-26 23:29:15.883768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.152 [2024-07-26 23:29:15.883800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:24.152 [2024-07-26 23:29:15.883813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.674 ms 00:21:24.152 [2024-07-26 23:29:15.883822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.152 [2024-07-26 23:29:15.884141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.152 [2024-07-26 23:29:15.884152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:24.152 [2024-07-26 23:29:15.884163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:21:24.152 [2024-07-26 23:29:15.884188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.410 [2024-07-26 23:29:15.939034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.410 [2024-07-26 23:29:15.939071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.410 [2024-07-26 23:29:15.939083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.410 [2024-07-26 23:29:15.939093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.410 [2024-07-26 23:29:15.939172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.410 [2024-07-26 23:29:15.939183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.410 [2024-07-26 23:29:15.939193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.410 [2024-07-26 23:29:15.939203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.410 [2024-07-26 23:29:15.939282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.410 [2024-07-26 23:29:15.939295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.410 [2024-07-26 23:29:15.939305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.410 [2024-07-26 23:29:15.939314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.410 [2024-07-26 23:29:15.939332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.410 [2024-07-26 23:29:15.939342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.410 [2024-07-26 23:29:15.939351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.410 [2024-07-26 23:29:15.939362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.410 [2024-07-26 23:29:16.055943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.410 [2024-07-26 23:29:16.056005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.410 [2024-07-26 23:29:16.056020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.410 [2024-07-26 23:29:16.056030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.410 [2024-07-26 23:29:16.101185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.410 [2024-07-26 23:29:16.101223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.411 [2024-07-26 23:29:16.101235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.411 
[2024-07-26 23:29:16.101245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.411 [2024-07-26 23:29:16.101340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.411 [2024-07-26 23:29:16.101358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:24.411 [2024-07-26 23:29:16.101369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.411 [2024-07-26 23:29:16.101380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.411 [2024-07-26 23:29:16.101427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.411 [2024-07-26 23:29:16.101439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:24.411 [2024-07-26 23:29:16.101449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.411 [2024-07-26 23:29:16.101459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.411 [2024-07-26 23:29:16.101599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.411 [2024-07-26 23:29:16.101617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:24.411 [2024-07-26 23:29:16.101627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.411 [2024-07-26 23:29:16.101637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.411 [2024-07-26 23:29:16.101679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.411 [2024-07-26 23:29:16.101692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:24.411 [2024-07-26 23:29:16.101702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.411 [2024-07-26 23:29:16.101712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.411 [2024-07-26 23:29:16.101757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.411 [2024-07-26 23:29:16.101769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:24.411 [2024-07-26 23:29:16.101784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.411 [2024-07-26 23:29:16.101794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.411 [2024-07-26 23:29:16.101845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.411 [2024-07-26 23:29:16.101857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:24.411 [2024-07-26 23:29:16.101867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.411 [2024-07-26 23:29:16.101877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.411 [2024-07-26 23:29:16.102040] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 472.842 ms, result 0 00:21:25.787 00:21:25.787 00:21:25.787 23:29:17 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:26.046 [2024-07-26 23:29:17.575754] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:26.046 [2024-07-26 23:29:17.575887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75079 ] 00:21:26.046 [2024-07-26 23:29:17.748646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.305 [2024-07-26 23:29:17.956736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.874 [2024-07-26 23:29:18.351422] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:26.874 [2024-07-26 23:29:18.351490] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:26.874 [2024-07-26 23:29:18.513867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.874 [2024-07-26 23:29:18.513918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:26.874 [2024-07-26 23:29:18.513935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:26.874 [2024-07-26 23:29:18.513945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.874 [2024-07-26 23:29:18.514009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.874 [2024-07-26 23:29:18.514021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:26.874 [2024-07-26 23:29:18.514032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:26.874 [2024-07-26 23:29:18.514041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.874 [2024-07-26 23:29:18.514064] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:26.874 [2024-07-26 23:29:18.515152] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:26.874 [2024-07-26 23:29:18.515180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.874 [2024-07-26 23:29:18.515190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:26.874 [2024-07-26 23:29:18.515201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:21:26.874 [2024-07-26 23:29:18.515211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.874 [2024-07-26 23:29:18.517616] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:26.874 [2024-07-26 23:29:18.536128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.874 [2024-07-26 23:29:18.536167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:26.874 [2024-07-26 23:29:18.536186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.542 ms 00:21:26.874 [2024-07-26 23:29:18.536196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.874 [2024-07-26 23:29:18.536258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.874 [2024-07-26 23:29:18.536270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:26.874 [2024-07-26 23:29:18.536281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:26.874 [2024-07-26 23:29:18.536290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.874 [2024-07-26 23:29:18.548081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.874 [2024-07-26 
23:29:18.548114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:26.874 [2024-07-26 23:29:18.548127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.742 ms 00:21:26.874 [2024-07-26 23:29:18.548138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.874 [2024-07-26 23:29:18.548234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.874 [2024-07-26 23:29:18.548249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:26.874 [2024-07-26 23:29:18.548261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:21:26.874 [2024-07-26 23:29:18.548270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.874 [2024-07-26 23:29:18.548323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.874 [2024-07-26 23:29:18.548339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:26.874 [2024-07-26 23:29:18.548349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:26.874 [2024-07-26 23:29:18.548358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.874 [2024-07-26 23:29:18.548387] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:26.874 [2024-07-26 23:29:18.554565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.874 [2024-07-26 23:29:18.554597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:26.874 [2024-07-26 23:29:18.554609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.198 ms 00:21:26.874 [2024-07-26 23:29:18.554619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.874 [2024-07-26 23:29:18.554653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.874 [2024-07-26 23:29:18.554663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:26.874 [2024-07-26 23:29:18.554674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:26.874 [2024-07-26 23:29:18.554683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.874 [2024-07-26 23:29:18.554717] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:26.874 [2024-07-26 23:29:18.554747] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:26.874 [2024-07-26 23:29:18.554782] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:26.874 [2024-07-26 23:29:18.554803] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:26.874 [2024-07-26 23:29:18.554868] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:26.874 [2024-07-26 23:29:18.554891] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:26.874 [2024-07-26 23:29:18.554904] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:26.874 [2024-07-26 23:29:18.554916] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:26.874 [2024-07-26 23:29:18.554928] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:26.874 [2024-07-26 23:29:18.554942] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:26.874 [2024-07-26 23:29:18.554953] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:26.874 [2024-07-26 23:29:18.554991] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:26.874 [2024-07-26 23:29:18.555001] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:26.874 [2024-07-26 23:29:18.555011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.874 [2024-07-26 23:29:18.555022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:26.874 [2024-07-26 23:29:18.555033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:21:26.874 [2024-07-26 23:29:18.555043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.874 [2024-07-26 23:29:18.555100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.874 [2024-07-26 23:29:18.555111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:26.874 [2024-07-26 23:29:18.555124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:26.874 [2024-07-26 23:29:18.555134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.874 [2024-07-26 23:29:18.555199] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:26.874 [2024-07-26 23:29:18.555211] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:26.874 [2024-07-26 23:29:18.555222] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:26.874 [2024-07-26 23:29:18.555232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.874 [2024-07-26 23:29:18.555242] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:26.874 [2024-07-26 23:29:18.555251] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:26.874 [2024-07-26 23:29:18.555261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:26.874 [2024-07-26 23:29:18.555270] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:26.874 [2024-07-26 23:29:18.555280] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:26.874 [2024-07-26 23:29:18.555290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:26.874 [2024-07-26 23:29:18.555300] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:26.874 [2024-07-26 23:29:18.555309] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:26.874 [2024-07-26 23:29:18.555318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:26.875 [2024-07-26 23:29:18.555327] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:26.875 [2024-07-26 23:29:18.555337] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:26.875 [2024-07-26 23:29:18.555346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.875 [2024-07-26 23:29:18.555355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:26.875 [2024-07-26 23:29:18.555364] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:26.875 [2024-07-26 23:29:18.555373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:26.875 [2024-07-26 23:29:18.555382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:26.875 [2024-07-26 23:29:18.555391] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:26.875 [2024-07-26 23:29:18.555411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:26.875 [2024-07-26 23:29:18.555421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:26.875 [2024-07-26 23:29:18.555430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:26.875 [2024-07-26 23:29:18.555439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:26.875 [2024-07-26 23:29:18.555448] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:26.875 [2024-07-26 23:29:18.555458] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:26.875 [2024-07-26 23:29:18.555467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:26.875 [2024-07-26 23:29:18.555476] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:26.875 [2024-07-26 23:29:18.555485] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:26.875 [2024-07-26 23:29:18.555494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:26.875 [2024-07-26 23:29:18.555503] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:26.875 [2024-07-26 23:29:18.555512] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:26.875 [2024-07-26 23:29:18.555521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:26.875 [2024-07-26 23:29:18.555530] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:26.875 [2024-07-26 23:29:18.555539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:26.875 [2024-07-26 23:29:18.555548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:26.875 [2024-07-26 23:29:18.555557] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:26.875 [2024-07-26 23:29:18.555567] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:26.875 [2024-07-26 23:29:18.555575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:26.875 [2024-07-26 23:29:18.555584] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:26.875 [2024-07-26 23:29:18.555595] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:26.875 [2024-07-26 23:29:18.555605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:26.875 [2024-07-26 23:29:18.555619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.875 [2024-07-26 23:29:18.555629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:26.875 [2024-07-26 23:29:18.555638] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:26.875 [2024-07-26 23:29:18.555647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:26.875 [2024-07-26 23:29:18.555656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:26.875 [2024-07-26 23:29:18.555664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:26.875 [2024-07-26 23:29:18.555673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:26.875 [2024-07-26 23:29:18.555684] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:26.875 [2024-07-26 23:29:18.555696] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:26.875 [2024-07-26 23:29:18.555708] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:26.875 [2024-07-26 23:29:18.555719] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:26.875 [2024-07-26 23:29:18.555730] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:26.875 [2024-07-26 23:29:18.555740] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:26.875 [2024-07-26 23:29:18.555751] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:26.875 [2024-07-26 23:29:18.555762] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:26.875 [2024-07-26 23:29:18.555773] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:26.875 [2024-07-26 23:29:18.555783] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:26.875 [2024-07-26 23:29:18.555793] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:26.875 [2024-07-26 23:29:18.555803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:26.875 [2024-07-26 23:29:18.555813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:26.875 [2024-07-26 23:29:18.555823] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:26.875 [2024-07-26 23:29:18.555833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:26.875 [2024-07-26 23:29:18.555843] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:26.875 [2024-07-26 23:29:18.555854] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:26.875 [2024-07-26 23:29:18.555865] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:26.875 [2024-07-26 23:29:18.555876] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:26.875 [2024-07-26 23:29:18.555886] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:26.875 [2024-07-26 23:29:18.555896] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:21:26.875 [2024-07-26 23:29:18.555906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.875 [2024-07-26 23:29:18.555915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:26.875 [2024-07-26 23:29:18.555927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:21:26.875 [2024-07-26 23:29:18.555938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.875 [2024-07-26 23:29:18.585455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.875 [2024-07-26 23:29:18.585488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:26.875 [2024-07-26 23:29:18.585500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.486 ms 00:21:26.875 [2024-07-26 23:29:18.585510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.875 [2024-07-26 23:29:18.585590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.875 [2024-07-26 23:29:18.585605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:26.875 [2024-07-26 23:29:18.585615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:26.875 [2024-07-26 23:29:18.585625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.134 [2024-07-26 23:29:18.665134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.134 [2024-07-26 23:29:18.665171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:27.134 [2024-07-26 23:29:18.665184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.588 ms 00:21:27.134 [2024-07-26 23:29:18.665199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.134 [2024-07-26 23:29:18.665248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.134 [2024-07-26 23:29:18.665259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:27.134 [2024-07-26 23:29:18.665270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:27.134 [2024-07-26 23:29:18.665280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.134 [2024-07-26 23:29:18.666068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.134 [2024-07-26 23:29:18.666088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:27.134 [2024-07-26 23:29:18.666100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:21:27.135 [2024-07-26 23:29:18.666110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.666227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.666240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:27.135 [2024-07-26 23:29:18.666251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:27.135 [2024-07-26 23:29:18.666261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.691335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.691372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:27.135 [2024-07-26 23:29:18.691385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.090 ms 00:21:27.135 [2024-07-26 
23:29:18.691396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.711272] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:27.135 [2024-07-26 23:29:18.711309] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:27.135 [2024-07-26 23:29:18.711324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.711335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:27.135 [2024-07-26 23:29:18.711346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.849 ms 00:21:27.135 [2024-07-26 23:29:18.711355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.740050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.740087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:27.135 [2024-07-26 23:29:18.740101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.697 ms 00:21:27.135 [2024-07-26 23:29:18.740127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.757863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.757901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:27.135 [2024-07-26 23:29:18.757913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.705 ms 00:21:27.135 [2024-07-26 23:29:18.757923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.775335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.775369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:27.135 [2024-07-26 23:29:18.775381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.403 ms 00:21:27.135 [2024-07-26 23:29:18.775390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.775862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.775885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:27.135 [2024-07-26 23:29:18.775897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:21:27.135 [2024-07-26 23:29:18.775907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.869356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.869407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:27.135 [2024-07-26 23:29:18.869423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.580 ms 00:21:27.135 [2024-07-26 23:29:18.869434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.880865] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:27.135 [2024-07-26 23:29:18.884396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.884429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:27.135 [2024-07-26 23:29:18.884444] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.940 ms 00:21:27.135 [2024-07-26 23:29:18.884455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.884533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.884550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:27.135 [2024-07-26 23:29:18.884564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:27.135 [2024-07-26 23:29:18.884575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.884651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.884663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:27.135 [2024-07-26 23:29:18.884674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:27.135 [2024-07-26 23:29:18.884684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.887078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.887106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:27.135 [2024-07-26 23:29:18.887122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.379 ms 00:21:27.135 [2024-07-26 23:29:18.887132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.887164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.887176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:27.135 [2024-07-26 23:29:18.887186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:27.135 [2024-07-26 23:29:18.887202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.135 [2024-07-26 23:29:18.887247] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:27.135 [2024-07-26 23:29:18.887260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.135 [2024-07-26 23:29:18.887270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:27.135 [2024-07-26 23:29:18.887280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:27.135 [2024-07-26 23:29:18.887294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.394 [2024-07-26 23:29:18.924447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.394 [2024-07-26 23:29:18.924486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:27.394 [2024-07-26 23:29:18.924499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.193 ms 00:21:27.394 [2024-07-26 23:29:18.924526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.394 [2024-07-26 23:29:18.924605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.394 [2024-07-26 23:29:18.924624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:27.394 [2024-07-26 23:29:18.924636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:27.394 [2024-07-26 23:29:18.924646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.394 [2024-07-26 23:29:18.926142] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 412.402 ms, result 0 00:22:11.990  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-26 23:30:03.692100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.990 [2024-07-26 23:30:03.692217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:11.990 [2024-07-26 23:30:03.692270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:11.990 [2024-07-26 23:30:03.692308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.990 [2024-07-26 23:30:03.692385] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:11.990 [2024-07-26 23:30:03.703834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.990 [2024-07-26 23:30:03.703930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:11.990 [2024-07-26 23:30:03.704016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.799 ms 00:22:11.990 [2024-07-26 23:30:03.704075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.990 [2024-07-26 23:30:03.704712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.990 [2024-07-26 23:30:03.704769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:11.990 [2024-07-26 23:30:03.704796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:22:11.990 [2024-07-26 23:30:03.704821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.990 [2024-07-26 23:30:03.710684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.990 [2024-07-26 23:30:03.710759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:11.990 [2024-07-26 23:30:03.710787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.836 ms 00:22:11.990 [2024-07-26 23:30:03.710811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.990 [2024-07-26
23:30:03.719585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.990 [2024-07-26 23:30:03.719635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:11.990 [2024-07-26 23:30:03.719653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.734 ms 00:22:11.990 [2024-07-26 23:30:03.719669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.250 [2024-07-26 23:30:03.758058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.250 [2024-07-26 23:30:03.758099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:12.250 [2024-07-26 23:30:03.758114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.351 ms 00:22:12.250 [2024-07-26 23:30:03.758124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.250 [2024-07-26 23:30:03.782309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.250 [2024-07-26 23:30:03.782350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:12.250 [2024-07-26 23:30:03.782365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.183 ms 00:22:12.250 [2024-07-26 23:30:03.782376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.250 [2024-07-26 23:30:03.782508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.250 [2024-07-26 23:30:03.782530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:12.250 [2024-07-26 23:30:03.782542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:22:12.250 [2024-07-26 23:30:03.782552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.250 [2024-07-26 23:30:03.820280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.250 [2024-07-26 23:30:03.820322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:12.250 [2024-07-26 23:30:03.820336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.771 ms 00:22:12.250 [2024-07-26 23:30:03.820346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.250 [2024-07-26 23:30:03.855792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.250 [2024-07-26 23:30:03.855833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:12.250 [2024-07-26 23:30:03.855847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.465 ms 00:22:12.250 [2024-07-26 23:30:03.855858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.250 [2024-07-26 23:30:03.891773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.250 [2024-07-26 23:30:03.891813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:12.250 [2024-07-26 23:30:03.891827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.935 ms 00:22:12.250 [2024-07-26 23:30:03.891837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.250 [2024-07-26 23:30:03.928361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.250 [2024-07-26 23:30:03.928405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:12.250 [2024-07-26 23:30:03.928420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.492 ms 00:22:12.250 [2024-07-26 23:30:03.928431] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.250 [2024-07-26 23:30:03.928470] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:12.250 [2024-07-26 23:30:03.928490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 
0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:12.250 [2024-07-26 23:30:03.928831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.928844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.928857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.928869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.928883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.928895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.928907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.928919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.928930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.928942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.928954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.928966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.928990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929399] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 
23:30:03.929694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:12.251 [2024-07-26 23:30:03.929725] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:12.251 [2024-07-26 23:30:03.929737] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed2b9793-1462-4ab0-96b4-6bb7a0394f8d 00:22:12.251 [2024-07-26 23:30:03.929756] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:12.251 [2024-07-26 23:30:03.929768] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:12.251 [2024-07-26 23:30:03.929778] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:12.251 [2024-07-26 23:30:03.929790] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:12.252 [2024-07-26 23:30:03.929802] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:12.252 [2024-07-26 23:30:03.929813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:12.252 [2024-07-26 23:30:03.929824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:12.252 [2024-07-26 23:30:03.929834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:12.252 [2024-07-26 23:30:03.929851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:12.252 [2024-07-26 23:30:03.929863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.252 [2024-07-26 23:30:03.929874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:12.252 [2024-07-26 23:30:03.929885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:22:12.252 [2024-07-26 23:30:03.929909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.252 [2024-07-26 23:30:03.949187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.252 [2024-07-26 23:30:03.949226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:12.252 [2024-07-26 23:30:03.949256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.241 ms 00:22:12.252 [2024-07-26 23:30:03.949268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.252 [2024-07-26 23:30:03.949506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.252 [2024-07-26 23:30:03.949519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:12.252 [2024-07-26 23:30:03.949530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:22:12.252 [2024-07-26 23:30:03.949548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.252 [2024-07-26 23:30:04.002360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.252 [2024-07-26 23:30:04.002401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:12.252 [2024-07-26 23:30:04.002416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.252 [2024-07-26 23:30:04.002429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.252 [2024-07-26 23:30:04.002483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.252 [2024-07-26 23:30:04.002496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:22:12.252 [2024-07-26 23:30:04.002507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.252 [2024-07-26 23:30:04.002525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.252 [2024-07-26 23:30:04.002604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.252 [2024-07-26 23:30:04.002619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:12.252 [2024-07-26 23:30:04.002630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.252 [2024-07-26 23:30:04.002641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.252 [2024-07-26 23:30:04.002660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.252 [2024-07-26 23:30:04.002688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:12.252 [2024-07-26 23:30:04.002700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.252 [2024-07-26 23:30:04.002712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.512 [2024-07-26 23:30:04.113489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.512 [2024-07-26 23:30:04.113540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:12.512 [2024-07-26 23:30:04.113555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.512 [2024-07-26 23:30:04.113567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.512 [2024-07-26 23:30:04.155816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.512 [2024-07-26 23:30:04.155858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:12.512 [2024-07-26 23:30:04.155872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.512 [2024-07-26 23:30:04.155883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.512 [2024-07-26 23:30:04.155993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.512 [2024-07-26 23:30:04.156024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:12.512 [2024-07-26 23:30:04.156037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.512 [2024-07-26 23:30:04.156049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.512 [2024-07-26 23:30:04.156100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.512 [2024-07-26 23:30:04.156115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:12.512 [2024-07-26 23:30:04.156127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.512 [2024-07-26 23:30:04.156137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.512 [2024-07-26 23:30:04.156247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.512 [2024-07-26 23:30:04.156280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:12.512 [2024-07-26 23:30:04.156293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.512 [2024-07-26 23:30:04.156304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.512 [2024-07-26 23:30:04.156352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.512 [2024-07-26 23:30:04.156366] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:12.512 [2024-07-26 23:30:04.156378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.512 [2024-07-26 23:30:04.156389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.512 [2024-07-26 23:30:04.156431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.512 [2024-07-26 23:30:04.156449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:12.512 [2024-07-26 23:30:04.156461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.512 [2024-07-26 23:30:04.156472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.512 [2024-07-26 23:30:04.156518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.512 [2024-07-26 23:30:04.156532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:12.512 [2024-07-26 23:30:04.156544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.512 [2024-07-26 23:30:04.156555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.512 [2024-07-26 23:30:04.156682] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 466.232 ms, result 0 00:22:13.909 00:22:13.909 00:22:13.909 23:30:05 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:15.816 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:15.816 23:30:07 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:15.816 [2024-07-26 23:30:07.226799] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:15.816 [2024-07-26 23:30:07.226917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75584 ] 00:22:15.816 [2024-07-26 23:30:07.399415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.075 [2024-07-26 23:30:07.674757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.645 [2024-07-26 23:30:08.108199] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:16.645 [2024-07-26 23:30:08.108286] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:16.645 [2024-07-26 23:30:08.266923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.645 [2024-07-26 23:30:08.266999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:16.645 [2024-07-26 23:30:08.267016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:16.645 [2024-07-26 23:30:08.267027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.645 [2024-07-26 23:30:08.267080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.645 [2024-07-26 23:30:08.267092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:16.645 [2024-07-26 23:30:08.267103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:16.645 [2024-07-26 23:30:08.267113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.645 [2024-07-26 23:30:08.267134] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:16.645 [2024-07-26 23:30:08.268279] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:16.645 [2024-07-26 23:30:08.268304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.645 [2024-07-26 23:30:08.268315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:16.645 [2024-07-26 23:30:08.268326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:22:16.645 [2024-07-26 23:30:08.268336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.645 [2024-07-26 23:30:08.270615] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:16.645 [2024-07-26 23:30:08.289591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.645 [2024-07-26 23:30:08.289635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:16.645 [2024-07-26 23:30:08.289656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.007 ms 00:22:16.645 [2024-07-26 23:30:08.289666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.645 [2024-07-26 23:30:08.289724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.645 [2024-07-26 23:30:08.289736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:16.645 [2024-07-26 23:30:08.289747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:16.645 [2024-07-26 23:30:08.289756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.645 [2024-07-26 23:30:08.301339] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.645 [2024-07-26 
23:30:08.301364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:16.645 [2024-07-26 23:30:08.301376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.537 ms 00:22:16.645 [2024-07-26 23:30:08.301386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.645 [2024-07-26 23:30:08.301476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.645 [2024-07-26 23:30:08.301489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:16.645 [2024-07-26 23:30:08.301500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:16.645 [2024-07-26 23:30:08.301510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.645 [2024-07-26 23:30:08.301561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.645 [2024-07-26 23:30:08.301577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:16.645 [2024-07-26 23:30:08.301588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:16.645 [2024-07-26 23:30:08.301597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.645 [2024-07-26 23:30:08.301625] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:16.645 [2024-07-26 23:30:08.308047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.645 [2024-07-26 23:30:08.308073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:16.645 [2024-07-26 23:30:08.308085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.442 ms 00:22:16.645 [2024-07-26 23:30:08.308112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.645 [2024-07-26 23:30:08.308148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.645 [2024-07-26 23:30:08.308159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:16.645 [2024-07-26 23:30:08.308170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:16.645 [2024-07-26 23:30:08.308180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.645 [2024-07-26 23:30:08.308213] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:16.645 [2024-07-26 23:30:08.308244] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:22:16.645 [2024-07-26 23:30:08.308282] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:16.645 [2024-07-26 23:30:08.308300] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:22:16.645 [2024-07-26 23:30:08.308367] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:16.645 [2024-07-26 23:30:08.308381] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:16.645 [2024-07-26 23:30:08.308394] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:16.645 [2024-07-26 23:30:08.308408] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:16.645 [2024-07-26 23:30:08.308420] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:16.645 [2024-07-26 23:30:08.308435] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:16.645 [2024-07-26 23:30:08.308446] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:16.645 [2024-07-26 23:30:08.308456] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:16.645 [2024-07-26 23:30:08.308466] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:16.645 [2024-07-26 23:30:08.308477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.645 [2024-07-26 23:30:08.308487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:16.645 [2024-07-26 23:30:08.308497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:22:16.645 [2024-07-26 23:30:08.308507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.645 [2024-07-26 23:30:08.308561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.645 [2024-07-26 23:30:08.308572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:16.645 [2024-07-26 23:30:08.308586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:16.645 [2024-07-26 23:30:08.308596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.645 [2024-07-26 23:30:08.308662] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:16.646 [2024-07-26 23:30:08.308675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:16.646 [2024-07-26 23:30:08.308686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.646 [2024-07-26 23:30:08.308696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.646 [2024-07-26 23:30:08.308707] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:16.646 [2024-07-26 23:30:08.308716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:16.646 [2024-07-26 23:30:08.308726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:16.646 [2024-07-26 23:30:08.308736] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:16.646 [2024-07-26 23:30:08.308746] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:16.646 [2024-07-26 23:30:08.308755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:16.646 [2024-07-26 23:30:08.308766] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:16.646 [2024-07-26 23:30:08.308775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:16.646 [2024-07-26 23:30:08.308784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:16.646 [2024-07-26 23:30:08.308793] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:16.646 [2024-07-26 23:30:08.308802] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:16.646 [2024-07-26 23:30:08.308811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.646 [2024-07-26 23:30:08.308820] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:16.646 [2024-07-26 23:30:08.308829] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:16.646 [2024-07-26 23:30:08.308839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:22:16.646 [2024-07-26 23:30:08.308848] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:16.646 [2024-07-26 23:30:08.308856] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:16.646 [2024-07-26 23:30:08.308877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:16.646 [2024-07-26 23:30:08.308886] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:16.646 [2024-07-26 23:30:08.308895] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:16.646 [2024-07-26 23:30:08.308904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:16.646 [2024-07-26 23:30:08.308913] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:16.646 [2024-07-26 23:30:08.308922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:16.646 [2024-07-26 23:30:08.308931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:16.646 [2024-07-26 23:30:08.308940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:16.646 [2024-07-26 23:30:08.308949] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:16.646 [2024-07-26 23:30:08.308958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:16.646 [2024-07-26 23:30:08.308967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:16.646 [2024-07-26 23:30:08.308976] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:16.646 [2024-07-26 23:30:08.308996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:16.646 [2024-07-26 23:30:08.309005] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:16.646 [2024-07-26 23:30:08.309015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:16.646 [2024-07-26 23:30:08.309023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.646 [2024-07-26 23:30:08.309032] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:16.646 [2024-07-26 23:30:08.309041] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:16.646 [2024-07-26 23:30:08.309050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.646 [2024-07-26 23:30:08.309059] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:16.646 [2024-07-26 23:30:08.309069] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:16.646 [2024-07-26 23:30:08.309092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.646 [2024-07-26 23:30:08.309107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.646 [2024-07-26 23:30:08.309117] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:16.646 [2024-07-26 23:30:08.309126] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:16.646 [2024-07-26 23:30:08.309134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:16.646 [2024-07-26 23:30:08.309144] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:16.646 [2024-07-26 23:30:08.309152] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:16.646 [2024-07-26 23:30:08.309162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:16.646 [2024-07-26 23:30:08.309171] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:16.646 [2024-07-26 23:30:08.309183] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.646 [2024-07-26 23:30:08.309193] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:16.646 [2024-07-26 23:30:08.309203] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:16.646 [2024-07-26 23:30:08.309212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:16.646 [2024-07-26 23:30:08.309222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:16.646 [2024-07-26 23:30:08.309232] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:16.646 [2024-07-26 23:30:08.309242] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:16.646 [2024-07-26 23:30:08.309252] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:16.646 [2024-07-26 23:30:08.309261] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:16.646 [2024-07-26 23:30:08.309271] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:16.646 [2024-07-26 23:30:08.309281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:16.646 [2024-07-26 23:30:08.309290] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:16.646 [2024-07-26 23:30:08.309301] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:16.646 [2024-07-26 23:30:08.309310] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:16.646 [2024-07-26 23:30:08.309320] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:16.646 [2024-07-26 23:30:08.309331] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.646 [2024-07-26 23:30:08.309341] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:16.646 [2024-07-26 23:30:08.309351] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:16.646 [2024-07-26 23:30:08.309361] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:16.646 [2024-07-26 23:30:08.309371] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:22:16.646 [2024-07-26 23:30:08.309380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.646 [2024-07-26 23:30:08.309390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:16.646 [2024-07-26 23:30:08.309400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:22:16.646 [2024-07-26 23:30:08.309410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.646 [2024-07-26 23:30:08.338617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.646 [2024-07-26 23:30:08.338644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:16.646 [2024-07-26 23:30:08.338657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.212 ms 00:22:16.646 [2024-07-26 23:30:08.338668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.646 [2024-07-26 23:30:08.338747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.646 [2024-07-26 23:30:08.338761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:16.646 [2024-07-26 23:30:08.338772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:16.646 [2024-07-26 23:30:08.338782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.421171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.421205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:16.906 [2024-07-26 23:30:08.421234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.475 ms 00:22:16.906 [2024-07-26 23:30:08.421249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.421299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.421311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:16.906 [2024-07-26 23:30:08.421322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:16.906 [2024-07-26 23:30:08.421332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.422115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.422130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:16.906 [2024-07-26 23:30:08.422141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:22:16.906 [2024-07-26 23:30:08.422151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.422290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.422303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:16.906 [2024-07-26 23:30:08.422313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:22:16.906 [2024-07-26 23:30:08.422339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.448092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.448122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:16.906 [2024-07-26 23:30:08.448151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.772 ms 00:22:16.906 [2024-07-26 
23:30:08.448161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.468314] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:16.906 [2024-07-26 23:30:08.468346] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:16.906 [2024-07-26 23:30:08.468377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.468388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:16.906 [2024-07-26 23:30:08.468401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.138 ms 00:22:16.906 [2024-07-26 23:30:08.468411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.497782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.497825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:16.906 [2024-07-26 23:30:08.497839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.375 ms 00:22:16.906 [2024-07-26 23:30:08.497849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.515408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.515438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:16.906 [2024-07-26 23:30:08.515450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.542 ms 00:22:16.906 [2024-07-26 23:30:08.515475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.533624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.533652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:16.906 [2024-07-26 23:30:08.533664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.140 ms 00:22:16.906 [2024-07-26 23:30:08.533673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.534141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.534157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:16.906 [2024-07-26 23:30:08.534168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:22:16.906 [2024-07-26 23:30:08.534177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.637667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.637717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:16.906 [2024-07-26 23:30:08.637735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.637 ms 00:22:16.906 [2024-07-26 23:30:08.637762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.650292] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:16.906 [2024-07-26 23:30:08.654466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.654493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:16.906 [2024-07-26 23:30:08.654506] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.677 ms 00:22:16.906 [2024-07-26 23:30:08.654517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.654610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.654630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:16.906 [2024-07-26 23:30:08.654641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:16.906 [2024-07-26 23:30:08.654651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.654731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.654742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:16.906 [2024-07-26 23:30:08.654753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:16.906 [2024-07-26 23:30:08.654763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.906 [2024-07-26 23:30:08.657126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.906 [2024-07-26 23:30:08.657154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:16.906 [2024-07-26 23:30:08.657170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.347 ms 00:22:16.907 [2024-07-26 23:30:08.657180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.907 [2024-07-26 23:30:08.657213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.907 [2024-07-26 23:30:08.657225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:16.907 [2024-07-26 23:30:08.657236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:16.907 [2024-07-26 23:30:08.657252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.907 [2024-07-26 23:30:08.657298] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:16.907 [2024-07-26 23:30:08.657310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.907 [2024-07-26 23:30:08.657321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:16.907 [2024-07-26 23:30:08.657332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:16.907 [2024-07-26 23:30:08.657346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.166 [2024-07-26 23:30:08.694657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.166 [2024-07-26 23:30:08.694691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:17.166 [2024-07-26 23:30:08.694706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.350 ms 00:22:17.166 [2024-07-26 23:30:08.694717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.166 [2024-07-26 23:30:08.694796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.166 [2024-07-26 23:30:08.694816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:17.166 [2024-07-26 23:30:08.694827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:17.166 [2024-07-26 23:30:08.694838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.166 [2024-07-26 23:30:08.696275] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 429.499 ms, result 0 00:23:05.915  Copying: 1024/1024 [MB] (average 21 MBps)[2024-07-26 23:30:57.419165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.915 [2024-07-26 23:30:57.419241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:05.915 [2024-07-26 23:30:57.419260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:05.915 [2024-07-26 23:30:57.419300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.915 [2024-07-26 23:30:57.420235] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:05.915 [2024-07-26 23:30:57.424446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.915 [2024-07-26 23:30:57.424489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:05.915 [2024-07-26 23:30:57.424503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.184 ms 00:23:05.915 [2024-07-26 23:30:57.424514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.915 [2024-07-26 23:30:57.435483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.915 [2024-07-26 23:30:57.435521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:05.915 [2024-07-26 23:30:57.435535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.784 ms 00:23:05.915 [2024-07-26 23:30:57.435562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.915 [2024-07-26 23:30:57.453913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.915 [2024-07-26 23:30:57.453952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:05.915 [2024-07-26 23:30:57.453985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.354 ms 00:23:05.915 
[2024-07-26 23:30:57.453996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.915 [2024-07-26 23:30:57.459154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.915 [2024-07-26 23:30:57.459184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:05.915 [2024-07-26 23:30:57.459196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.134 ms 00:23:05.915 [2024-07-26 23:30:57.459207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.915 [2024-07-26 23:30:57.495901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.915 [2024-07-26 23:30:57.495936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:05.915 [2024-07-26 23:30:57.495949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.703 ms 00:23:05.915 [2024-07-26 23:30:57.495997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.915 [2024-07-26 23:30:57.517619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.915 [2024-07-26 23:30:57.517658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:05.915 [2024-07-26 23:30:57.517670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.620 ms 00:23:05.915 [2024-07-26 23:30:57.517680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.915 [2024-07-26 23:30:57.590050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.915 [2024-07-26 23:30:57.590086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:05.915 [2024-07-26 23:30:57.590100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.451 ms 00:23:05.915 [2024-07-26 23:30:57.590110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.915 [2024-07-26 23:30:57.626049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.915 [2024-07-26 23:30:57.626080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:05.915 [2024-07-26 23:30:57.626093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.981 ms 00:23:05.915 [2024-07-26 23:30:57.626118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.915 [2024-07-26 23:30:57.661784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.915 [2024-07-26 23:30:57.661815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:05.915 [2024-07-26 23:30:57.661828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.680 ms 00:23:05.915 [2024-07-26 23:30:57.661837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.175 [2024-07-26 23:30:57.697589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.176 [2024-07-26 23:30:57.697621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:06.176 [2024-07-26 23:30:57.697634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.779 ms 00:23:06.176 [2024-07-26 23:30:57.697643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.176 [2024-07-26 23:30:57.732670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.176 [2024-07-26 23:30:57.732704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:06.176 [2024-07-26 23:30:57.732716] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.006 ms 00:23:06.176 [2024-07-26 23:30:57.732725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.176 [2024-07-26 23:30:57.732759] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:06.176 [2024-07-26 23:30:57.732791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 61184 / 261120 wr_cnt: 1 state: open 00:23:06.176 [2024-07-26 23:30:57.732808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.732998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733609] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:06.176 [2024-07-26 23:30:57.733691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 
23:30:57.733873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:06.177 [2024-07-26 23:30:57.733912] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:06.177 [2024-07-26 23:30:57.733922] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed2b9793-1462-4ab0-96b4-6bb7a0394f8d 00:23:06.177 [2024-07-26 23:30:57.733933] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 61184 00:23:06.177 [2024-07-26 23:30:57.733943] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 62144 00:23:06.177 [2024-07-26 23:30:57.733953] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 61184 00:23:06.177 [2024-07-26 23:30:57.733963] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0157 00:23:06.177 [2024-07-26 23:30:57.733973] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:06.177 [2024-07-26 23:30:57.733989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:06.177 [2024-07-26 23:30:57.734000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:06.177 [2024-07-26 23:30:57.734018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:06.177 [2024-07-26 23:30:57.734028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:06.177 [2024-07-26 23:30:57.734038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.177 [2024-07-26 23:30:57.734048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:06.177 [2024-07-26 23:30:57.734058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:23:06.177 [2024-07-26 23:30:57.734079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.177 [2024-07-26 23:30:57.753788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.177 [2024-07-26 23:30:57.753818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:06.177 [2024-07-26 23:30:57.753830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.696 ms 00:23:06.177 [2024-07-26 23:30:57.753845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.177 [2024-07-26 23:30:57.754148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.177 [2024-07-26 23:30:57.754160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:06.177 [2024-07-26 23:30:57.754171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:23:06.177 [2024-07-26 23:30:57.754181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.177 [2024-07-26 23:30:57.808206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.177 [2024-07-26 23:30:57.808240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:06.177 [2024-07-26 23:30:57.808258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.177 [2024-07-26 23:30:57.808284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.177 [2024-07-26 23:30:57.808348] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.177 [2024-07-26 23:30:57.808360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:06.177 [2024-07-26 23:30:57.808370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.177 [2024-07-26 23:30:57.808380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.177 [2024-07-26 23:30:57.808449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.177 [2024-07-26 23:30:57.808462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:06.177 [2024-07-26 23:30:57.808472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.177 [2024-07-26 23:30:57.808487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.177 [2024-07-26 23:30:57.808505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.177 [2024-07-26 23:30:57.808516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:06.177 [2024-07-26 23:30:57.808526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.177 [2024-07-26 23:30:57.808536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.177 [2024-07-26 23:30:57.924644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.177 [2024-07-26 23:30:57.924693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:06.177 [2024-07-26 23:30:57.924716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.177 [2024-07-26 23:30:57.924727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.437 [2024-07-26 23:30:57.971854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.437 [2024-07-26 23:30:57.971895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:06.437 [2024-07-26 23:30:57.971909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.437 [2024-07-26 23:30:57.971936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.437 [2024-07-26 23:30:57.972050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.437 [2024-07-26 23:30:57.972064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:06.437 [2024-07-26 23:30:57.972076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.437 [2024-07-26 23:30:57.972086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.437 [2024-07-26 23:30:57.972176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.437 [2024-07-26 23:30:57.972188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:06.437 [2024-07-26 23:30:57.972199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.437 [2024-07-26 23:30:57.972211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.437 [2024-07-26 23:30:57.972331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.437 [2024-07-26 23:30:57.972344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:06.437 [2024-07-26 23:30:57.972356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.437 [2024-07-26 23:30:57.972366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:06.437 [2024-07-26 23:30:57.972409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.437 [2024-07-26 23:30:57.972422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:06.437 [2024-07-26 23:30:57.972432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.437 [2024-07-26 23:30:57.972442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.437 [2024-07-26 23:30:57.972491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.437 [2024-07-26 23:30:57.972503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:06.437 [2024-07-26 23:30:57.972514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.437 [2024-07-26 23:30:57.972524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.437 [2024-07-26 23:30:57.972579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.437 [2024-07-26 23:30:57.972591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:06.437 [2024-07-26 23:30:57.972602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.437 [2024-07-26 23:30:57.972612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.437 [2024-07-26 23:30:57.972756] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 556.782 ms, result 0 00:23:08.344 00:23:08.344 00:23:08.344 23:30:59 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:08.344 [2024-07-26 23:30:59.846205] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:23:08.344 [2024-07-26 23:30:59.846322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76119 ] 00:23:08.344 [2024-07-26 23:31:00.016349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.603 [2024-07-26 23:31:00.278235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.169 [2024-07-26 23:31:00.709958] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:09.169 [2024-07-26 23:31:00.710040] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:09.169 [2024-07-26 23:31:00.867575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.169 [2024-07-26 23:31:00.867628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:09.169 [2024-07-26 23:31:00.867645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:09.169 [2024-07-26 23:31:00.867671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.169 [2024-07-26 23:31:00.867721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.169 [2024-07-26 23:31:00.867733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:09.169 [2024-07-26 23:31:00.867745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:09.169 [2024-07-26 23:31:00.867755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.169 [2024-07-26 23:31:00.867776] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:09.169 [2024-07-26 23:31:00.868926] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:09.169 [2024-07-26 23:31:00.868957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.169 [2024-07-26 23:31:00.868977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:09.169 [2024-07-26 23:31:00.868988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:23:09.169 [2024-07-26 23:31:00.868999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.169 [2024-07-26 23:31:00.871336] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:09.169 [2024-07-26 23:31:00.891541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.169 [2024-07-26 23:31:00.891579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:09.169 [2024-07-26 23:31:00.891598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.238 ms 00:23:09.169 [2024-07-26 23:31:00.891624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.169 [2024-07-26 23:31:00.891686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.169 [2024-07-26 23:31:00.891698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:09.169 [2024-07-26 23:31:00.891709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:09.169 [2024-07-26 23:31:00.891719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.169 [2024-07-26 23:31:00.903304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.169 [2024-07-26 
23:31:00.903334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:09.169 [2024-07-26 23:31:00.903346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.535 ms 00:23:09.169 [2024-07-26 23:31:00.903355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.169 [2024-07-26 23:31:00.903446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.169 [2024-07-26 23:31:00.903460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:09.169 [2024-07-26 23:31:00.903469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:09.169 [2024-07-26 23:31:00.903479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.169 [2024-07-26 23:31:00.903529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.169 [2024-07-26 23:31:00.903544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:09.169 [2024-07-26 23:31:00.903554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:09.169 [2024-07-26 23:31:00.903563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.169 [2024-07-26 23:31:00.903590] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:09.169 [2024-07-26 23:31:00.910027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.169 [2024-07-26 23:31:00.910056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:09.169 [2024-07-26 23:31:00.910068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.455 ms 00:23:09.169 [2024-07-26 23:31:00.910078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.169 [2024-07-26 23:31:00.910113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.169 [2024-07-26 23:31:00.910123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:09.169 [2024-07-26 23:31:00.910133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:09.169 [2024-07-26 23:31:00.910142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.169 [2024-07-26 23:31:00.910174] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:09.169 [2024-07-26 23:31:00.910204] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:09.169 [2024-07-26 23:31:00.910238] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:09.169 [2024-07-26 23:31:00.910254] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:09.169 [2024-07-26 23:31:00.910318] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:09.169 [2024-07-26 23:31:00.910331] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:09.169 [2024-07-26 23:31:00.910343] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:09.169 [2024-07-26 23:31:00.910356] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:09.169 [2024-07-26 23:31:00.910368] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:09.169 [2024-07-26 23:31:00.910383] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:09.169 [2024-07-26 23:31:00.910394] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:09.169 [2024-07-26 23:31:00.910403] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:09.169 [2024-07-26 23:31:00.910412] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:09.169 [2024-07-26 23:31:00.910422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.169 [2024-07-26 23:31:00.910432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:09.170 [2024-07-26 23:31:00.910442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:23:09.170 [2024-07-26 23:31:00.910452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.170 [2024-07-26 23:31:00.910504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.170 [2024-07-26 23:31:00.910514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:09.170 [2024-07-26 23:31:00.910528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:09.170 [2024-07-26 23:31:00.910537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.170 [2024-07-26 23:31:00.910598] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:09.170 [2024-07-26 23:31:00.910610] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:09.170 [2024-07-26 23:31:00.910621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:09.170 [2024-07-26 23:31:00.910631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.170 [2024-07-26 23:31:00.910640] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:09.170 [2024-07-26 23:31:00.910649] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:09.170 [2024-07-26 23:31:00.910658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:09.170 [2024-07-26 23:31:00.910667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:09.170 [2024-07-26 23:31:00.910678] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:09.170 [2024-07-26 23:31:00.910687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:09.170 [2024-07-26 23:31:00.910696] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:09.170 [2024-07-26 23:31:00.910705] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:09.170 [2024-07-26 23:31:00.910714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:09.170 [2024-07-26 23:31:00.910723] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:09.170 [2024-07-26 23:31:00.910731] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:09.170 [2024-07-26 23:31:00.910740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.170 [2024-07-26 23:31:00.910748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:09.170 [2024-07-26 23:31:00.910758] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:09.170 [2024-07-26 23:31:00.910766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:23:09.170 [2024-07-26 23:31:00.910775] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:09.170 [2024-07-26 23:31:00.910784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:09.170 [2024-07-26 23:31:00.910804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:09.170 [2024-07-26 23:31:00.910812] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:09.170 [2024-07-26 23:31:00.910821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:09.170 [2024-07-26 23:31:00.910829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:09.170 [2024-07-26 23:31:00.910838] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:09.170 [2024-07-26 23:31:00.910847] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:09.170 [2024-07-26 23:31:00.910856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:09.170 [2024-07-26 23:31:00.910864] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:09.170 [2024-07-26 23:31:00.910873] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:09.170 [2024-07-26 23:31:00.910882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:09.170 [2024-07-26 23:31:00.910891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:09.170 [2024-07-26 23:31:00.910899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:09.170 [2024-07-26 23:31:00.910908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:09.170 [2024-07-26 23:31:00.910916] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:09.170 [2024-07-26 23:31:00.910925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:09.170 [2024-07-26 23:31:00.910932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:09.170 [2024-07-26 23:31:00.910940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:09.170 [2024-07-26 23:31:00.910949] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:09.170 [2024-07-26 23:31:00.910957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:09.170 [2024-07-26 23:31:00.910984] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:09.170 [2024-07-26 23:31:00.910994] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:09.170 [2024-07-26 23:31:00.911020] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:09.170 [2024-07-26 23:31:00.911037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.170 [2024-07-26 23:31:00.911047] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:09.170 [2024-07-26 23:31:00.911057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:09.170 [2024-07-26 23:31:00.911066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:09.170 [2024-07-26 23:31:00.911075] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:09.170 [2024-07-26 23:31:00.911084] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:09.170 [2024-07-26 23:31:00.911094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:09.170 [2024-07-26 23:31:00.911104] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:09.170 [2024-07-26 23:31:00.911116] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:09.170 [2024-07-26 23:31:00.911128] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:23:09.170 [2024-07-26 23:31:00.911138] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:23:09.170 [2024-07-26 23:31:00.911148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:23:09.170 [2024-07-26 23:31:00.911158] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:23:09.170 [2024-07-26 23:31:00.911168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:23:09.170 [2024-07-26 23:31:00.911178] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:23:09.170 [2024-07-26 23:31:00.911188] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:23:09.170 [2024-07-26 23:31:00.911198] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:23:09.170 [2024-07-26 23:31:00.911225] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:23:09.170 [2024-07-26 23:31:00.911236] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:23:09.170 [2024-07-26 23:31:00.911246] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:23:09.170 [2024-07-26 23:31:00.911256] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:23:09.170 [2024-07-26 23:31:00.911266] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:23:09.170 [2024-07-26 23:31:00.911280] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:09.170 [2024-07-26 23:31:00.911291] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:09.170 [2024-07-26 23:31:00.911302] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:09.170 [2024-07-26 23:31:00.911312] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:09.170 [2024-07-26 23:31:00.911323] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:09.170 [2024-07-26 23:31:00.911332] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
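The Region entries above give offsets and sizes as hexadecimal counts of 4 KiB FTL blocks (the arithmetic pins the block size at 4096 bytes: blk_sz:0x20 is 32 blocks, the "blocks: 0.12 MiB" of the sb region), so they can be cross-checked against the MiB figures in the layout dump earlier: type:0x2 with blk_sz:0x5000 is 20480 blocks = 80 MiB, exactly the "Region l2p ... blocks: 80.00 MiB" reported above. A minimal sketch that does the conversion for every Region line, assuming this console output has been saved to ftl.log (a hypothetical filename):

  grep -o 'Region type:[^ ]* ver:[0-9]* blk_offs:[^ ]* blk_sz:[^ ]*' ftl.log |
  while read -r _ type _ offs sz; do
    # bash arithmetic accepts the 0x-prefixed hex values directly
    printf '%s  offs %d KiB  size %d KiB\n' "$type" $(( ${offs#blk_offs:} * 4 )) $(( ${sz#blk_sz:} * 4 ))
  done
  # e.g. type:0x2 -> size 81920 KiB (80 MiB), the l2p region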
00:23:09.170 [2024-07-26 23:31:00.911343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.170 [2024-07-26 23:31:00.911355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:23:09.170 [2024-07-26 23:31:00.911365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms
00:23:09.170 [2024-07-26 23:31:00.911375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:09.428 [2024-07-26 23:31:00.941005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.428 [2024-07-26 23:31:00.941037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:23:09.428 [2024-07-26 23:31:00.941050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.634 ms
00:23:09.428 [2024-07-26 23:31:00.941061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:09.428 [2024-07-26 23:31:00.941150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.428 [2024-07-26 23:31:00.941165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:23:09.428 [2024-07-26 23:31:00.941175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms
00:23:09.428 [2024-07-26 23:31:00.941185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:09.428 [2024-07-26 23:31:01.017593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.428 [2024-07-26 23:31:01.017627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:23:09.428 [2024-07-26 23:31:01.017640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.483 ms
00:23:09.428 [2024-07-26 23:31:01.017671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:09.428 [2024-07-26 23:31:01.017704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.428 [2024-07-26 23:31:01.017716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:23:09.428 [2024-07-26 23:31:01.017726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:23:09.428 [2024-07-26 23:31:01.017737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:09.428 [2024-07-26 23:31:01.018553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.428 [2024-07-26 23:31:01.018573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:23:09.428 [2024-07-26 23:31:01.018584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms
00:23:09.428 [2024-07-26 23:31:01.018594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:09.428 [2024-07-26 23:31:01.018710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.428 [2024-07-26 23:31:01.018724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:23:09.428 [2024-07-26 23:31:01.018735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms
00:23:09.428 [2024-07-26 23:31:01.018745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:09.428 [2024-07-26 23:31:01.043803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:09.428 [2024-07-26 23:31:01.043834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:23:09.428 [2024-07-26 23:31:01.043847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.074 ms
00:23:09.428 [2024-07-26 
23:31:01.043857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.428 [2024-07-26 23:31:01.064131] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:09.428 [2024-07-26 23:31:01.064170] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:09.428 [2024-07-26 23:31:01.064185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.428 [2024-07-26 23:31:01.064197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:09.428 [2024-07-26 23:31:01.064208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.239 ms 00:23:09.428 [2024-07-26 23:31:01.064219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.428 [2024-07-26 23:31:01.094858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.428 [2024-07-26 23:31:01.094908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:09.428 [2024-07-26 23:31:01.094923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.646 ms 00:23:09.428 [2024-07-26 23:31:01.094934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.428 [2024-07-26 23:31:01.113222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.428 [2024-07-26 23:31:01.113361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:09.428 [2024-07-26 23:31:01.113381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.165 ms 00:23:09.428 [2024-07-26 23:31:01.113392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.428 [2024-07-26 23:31:01.131723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.428 [2024-07-26 23:31:01.131756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:09.428 [2024-07-26 23:31:01.131770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.253 ms 00:23:09.428 [2024-07-26 23:31:01.131780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.428 [2024-07-26 23:31:01.132313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.428 [2024-07-26 23:31:01.132340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:09.428 [2024-07-26 23:31:01.132352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:23:09.428 [2024-07-26 23:31:01.132363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.686 [2024-07-26 23:31:01.226542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.686 [2024-07-26 23:31:01.226593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:09.686 [2024-07-26 23:31:01.226619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.308 ms 00:23:09.686 [2024-07-26 23:31:01.226629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.686 [2024-07-26 23:31:01.238430] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:09.686 [2024-07-26 23:31:01.241735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.686 [2024-07-26 23:31:01.241763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:09.686 [2024-07-26 23:31:01.241775] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.087 ms 00:23:09.686 [2024-07-26 23:31:01.241785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.686 [2024-07-26 23:31:01.241857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.686 [2024-07-26 23:31:01.241873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:09.686 [2024-07-26 23:31:01.241884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:09.686 [2024-07-26 23:31:01.241894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.686 [2024-07-26 23:31:01.243526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.686 [2024-07-26 23:31:01.243563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:09.686 [2024-07-26 23:31:01.243576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.594 ms 00:23:09.686 [2024-07-26 23:31:01.243587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.686 [2024-07-26 23:31:01.245972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.686 [2024-07-26 23:31:01.246005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:09.686 [2024-07-26 23:31:01.246020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.364 ms 00:23:09.686 [2024-07-26 23:31:01.246030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.686 [2024-07-26 23:31:01.246062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.686 [2024-07-26 23:31:01.246072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:09.686 [2024-07-26 23:31:01.246083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:09.686 [2024-07-26 23:31:01.246098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.686 [2024-07-26 23:31:01.246151] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:09.686 [2024-07-26 23:31:01.246164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.686 [2024-07-26 23:31:01.246174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:09.686 [2024-07-26 23:31:01.246184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:09.686 [2024-07-26 23:31:01.246197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.686 [2024-07-26 23:31:01.283028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.686 [2024-07-26 23:31:01.283063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:09.686 [2024-07-26 23:31:01.283077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.870 ms 00:23:09.686 [2024-07-26 23:31:01.283103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.686 [2024-07-26 23:31:01.283179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.686 [2024-07-26 23:31:01.283198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:09.686 [2024-07-26 23:31:01.283219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:09.686 [2024-07-26 23:31:01.283229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.686 [2024-07-26 23:31:01.285959] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 418.225 ms, result 0
00:23:53.281  Copying: 1024/1024 [MB] (average 24 MBps)
[2024-07-26 23:31:44.772588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.281 [2024-07-26 23:31:44.772660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:53.281 [2024-07-26 23:31:44.772682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:23:53.281 [2024-07-26 23:31:44.772694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.281 [2024-07-26 23:31:44.772732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:53.281 [2024-07-26 23:31:44.776748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.281 [2024-07-26 23:31:44.776783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:53.281 [2024-07-26 23:31:44.776796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.001 ms
00:23:53.281 [2024-07-26 23:31:44.776808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.281 [2024-07-26 23:31:44.777085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.281 [2024-07-26 23:31:44.777103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:53.281 [2024-07-26 23:31:44.777115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms
00:23:53.281 [2024-07-26 23:31:44.777126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.281 [2024-07-26 23:31:44.784992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.281 [2024-07-26 23:31:44.785031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:23:53.281 [2024-07-26 23:31:44.785045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.852 ms
00:23:53.281 [2024-07-26 23:31:44.785057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
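Each FTL management step is logged by mngt/ftl_mngt.c:trace_step as an Action / name / duration / status quadruple, and finish_msg reports the whole startup sequence at 418.225 ms, dominated by Restore P2L checkpoints (94.308 ms) and Initialize NV cache (76.483 ms). A minimal way to rank the steps by cost from a capture of this console output, assuming it has been saved to ftl.log (a hypothetical filename):

  grep 'trace_step' ftl.log |
  awk '/name:/     { sub(/.*name: /, "");     step = $0 }
       /duration:/ { sub(/.*duration: /, ""); printf "%10.3f ms  %s\n", $1, step }' |
  sort -rn | head

For the startup sequence this prints "    94.308 ms  Restore P2L checkpoints" first; the shutdown steps that follow below can be ranked the same way.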
00:23:53.281 [2024-07-26 23:31:44.790058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.281 [2024-07-26 23:31:44.790095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:23:53.281 [2024-07-26 23:31:44.790106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.981 ms
00:23:53.281 [2024-07-26 23:31:44.790132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.281 [2024-07-26 23:31:44.831045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.281 [2024-07-26 23:31:44.831080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:23:53.281 [2024-07-26 23:31:44.831110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.920 ms
00:23:53.281 [2024-07-26 23:31:44.831120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.281 [2024-07-26 23:31:44.852527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.281 [2024-07-26 23:31:44.852564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:23:53.281 [2024-07-26 23:31:44.852577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.404 ms
00:23:53.281 [2024-07-26 23:31:44.852587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.281 [2024-07-26 23:31:45.017386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.281 [2024-07-26 23:31:45.017421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:23:53.281 [2024-07-26 23:31:45.017436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 165.026 ms
00:23:53.281 [2024-07-26 23:31:45.017446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.542 [2024-07-26 23:31:45.054376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.542 [2024-07-26 23:31:45.054406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:23:53.542 [2024-07-26 23:31:45.054419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.973 ms
00:23:53.542 [2024-07-26 23:31:45.054428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.542 [2024-07-26 23:31:45.089945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.542 [2024-07-26 23:31:45.089990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:23:53.542 [2024-07-26 23:31:45.090003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.540 ms
00:23:53.542 [2024-07-26 23:31:45.090012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.542 [2024-07-26 23:31:45.124889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.542 [2024-07-26 23:31:45.124920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:23:53.542 [2024-07-26 23:31:45.124932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.900 ms
00:23:53.542 [2024-07-26 23:31:45.124941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.542 [2024-07-26 23:31:45.159540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.542 [2024-07-26 23:31:45.159569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:23:53.542 [2024-07-26 23:31:45.159580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.568 ms
00:23:53.542 [2024-07-26 23:31:45.159605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.542 [2024-07-26 
23:31:45.159639] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:53.543 [2024-07-26 23:31:45.159657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:23:53.543 [2024-07-26 23:31:45.159671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 
23:31:45.159915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.159999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:23:53.543 [2024-07-26 23:31:45.160213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:53.543 [2024-07-26 23:31:45.160524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:53.544 [2024-07-26 23:31:45.160726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free
00:23:53.544 [2024-07-26 23:31:45.160737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:23:53.544 [2024-07-26 23:31:45.160755] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:23:53.544 [2024-07-26 23:31:45.160766] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed2b9793-1462-4ab0-96b4-6bb7a0394f8d
00:23:53.544 [2024-07-26 23:31:45.160776] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632
00:23:53.544 [2024-07-26 23:31:45.160786] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 73408
00:23:53.544 [2024-07-26 23:31:45.160796] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 72448
00:23:53.544 [2024-07-26 23:31:45.160806] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0133
00:23:53.544 [2024-07-26 23:31:45.160815] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:53.544 [2024-07-26 23:31:45.160825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:23:53.544 [2024-07-26 23:31:45.160841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:23:53.544 [2024-07-26 23:31:45.160850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:23:53.544 [2024-07-26 23:31:45.160859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:23:53.544 [2024-07-26 23:31:45.160868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.544 [2024-07-26 23:31:45.160878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:23:53.544 [2024-07-26 23:31:45.160889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.232 ms
00:23:53.544 [2024-07-26 23:31:45.160899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
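The statistics block above is self-consistent: write amplification (WAF) is total writes over user writes, and 73408 / 72448 = 1.0133 as reported, so only about 1.3% of all media writes were FTL housekeeping on top of user data; the 133632 valid LBAs likewise match Band 1's "133632 / 261120" fill level in the band dump. A one-liner to reproduce the division:

  awk 'BEGIN { printf "WAF = %.4f\n", 73408 / 72448 }'    # -> WAF = 1.0133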
00:23:53.544 [2024-07-26 23:31:45.180801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.544 [2024-07-26 23:31:45.180828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:23:53.544 [2024-07-26 23:31:45.180840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.882 ms
00:23:53.544 [2024-07-26 23:31:45.180850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.544 [2024-07-26 23:31:45.181192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:53.544 [2024-07-26 23:31:45.181205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:23:53.544 [2024-07-26 23:31:45.181216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms
00:23:53.544 [2024-07-26 23:31:45.181226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.544 [2024-07-26 23:31:45.232621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.544 [2024-07-26 23:31:45.232652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:23:53.544 [2024-07-26 23:31:45.232685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:53.544 [2024-07-26 23:31:45.232696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.544 [2024-07-26 23:31:45.232757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.544 [2024-07-26 23:31:45.232769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:23:53.544 [2024-07-26 23:31:45.232779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:53.544 [2024-07-26 23:31:45.232789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.544 [2024-07-26 23:31:45.232856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.544 [2024-07-26 23:31:45.232869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:23:53.544 [2024-07-26 23:31:45.232880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:53.544 [2024-07-26 23:31:45.232895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.544 [2024-07-26 23:31:45.232912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.544 [2024-07-26 23:31:45.232923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:23:53.544 [2024-07-26 23:31:45.232934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:53.544 [2024-07-26 23:31:45.232943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.804 [2024-07-26 23:31:45.353251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.804 [2024-07-26 23:31:45.353300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:23:53.804 [2024-07-26 23:31:45.353320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:53.804 [2024-07-26 23:31:45.353331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.804 [2024-07-26 23:31:45.398556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.804 [2024-07-26 23:31:45.398593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:23:53.804 [2024-07-26 23:31:45.398606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:53.804 [2024-07-26 23:31:45.398617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.804 [2024-07-26 23:31:45.398707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.804 [2024-07-26 23:31:45.398719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:23:53.804 [2024-07-26 23:31:45.398729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:53.804 [2024-07-26 23:31:45.398739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.804 [2024-07-26 23:31:45.398790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.804 [2024-07-26 23:31:45.398801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:23:53.804 [2024-07-26 23:31:45.398810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:53.804 [2024-07-26 23:31:45.398820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.805 [2024-07-26 23:31:45.398956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.805 [2024-07-26 23:31:45.398985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:23:53.805 [2024-07-26 23:31:45.399013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:53.805 [2024-07-26 23:31:45.399024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.805 [2024-07-26 23:31:45.399068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.805 [2024-07-26 23:31:45.399082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:23:53.805 [2024-07-26 23:31:45.399092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:53.805 [2024-07-26 23:31:45.399102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.805 [2024-07-26 23:31:45.399146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.805 [2024-07-26 23:31:45.399157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:53.805 [2024-07-26 23:31:45.399168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:53.805 [2024-07-26 23:31:45.399178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.805 [2024-07-26 23:31:45.399233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:53.805 [2024-07-26 23:31:45.399245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:53.805 [2024-07-26 23:31:45.399255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:53.805 [2024-07-26 23:31:45.399265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:53.805 [2024-07-26 23:31:45.399409] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 627.800 ms, result 0
00:23:55.232 
00:23:55.232 
00:23:55.232 23:31:46 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:23:57.140 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:23:57.140 23:31:48 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:23:57.140 23:31:48 -- ftl/restore.sh@85 -- # restore_kill
00:23:57.140 23:31:48 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:23:57.140 23:31:48 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:23:57.140 23:31:48 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:57.140 23:31:48 -- ftl/restore.sh@32 -- # killprocess 74345
00:23:57.140 23:31:48 -- common/autotest_common.sh@926 -- # '[' -z 74345 ']'
00:23:57.140 23:31:48 -- common/autotest_common.sh@930 -- # kill -0 74345
00:23:57.140 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (74345) - No such process
00:23:57.140 23:31:48 -- common/autotest_common.sh@953 -- # echo 'Process with pid 74345 is not found'
00:23:57.140 Process with pid 74345 is not found
00:23:57.140 23:31:48 -- ftl/restore.sh@33 -- # remove_shm
00:23:57.140 Remove shared memory files
00:23:57.140 23:31:48 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:23:57.140 23:31:48 -- ftl/common.sh@205 -- # rm -f rm -f
00:23:57.140 23:31:48 -- ftl/common.sh@206 -- # rm -f rm -f
00:23:57.140 23:31:48 -- ftl/common.sh@207 -- # rm -f rm -f
00:23:57.140 23:31:48 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:23:57.140 23:31:48 -- ftl/common.sh@209 -- # rm -f rm -f
00:23:57.140 
00:23:57.140 real 3m37.378s
00:23:57.140 user 3m23.864s
00:23:57.140 sys 0m14.016s
00:23:57.140 23:31:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:57.140 23:31:48 -- common/autotest_common.sh@10 -- # set +x
00:23:57.140 ************************************
00:23:57.140 END TEST ftl_restore
00:23:57.140 ************************************
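ftl_restore passes because the md5sum -c above confirms that the test file, written before the simulated crash and read back after the FTL device came up from its dirty state, still matches its recorded checksum; killprocess then finds pid 74345 already gone, so teardown reduces to file cleanup. Condensed into a stand-alone sketch (killprocess and remove_shm are the suite's helpers; $svcpid stands in for the tracked pid, 74345 in this run):

  md5sum -c testfile.md5 || exit 1          # restored data must match the pre-shutdown checksum
  rm -f testfile testfile.md5 config/ftl.json
  if kill -0 "$svcpid" 2>/dev/null; then    # signal 0 only tests whether the process exists
      kill "$svcpid"
  else
      echo "Process with pid $svcpid is not found"   # the case hit in this run
  fi
  rm -f /dev/shm/iscsi                      # remove_shm: drop leftover shared-memory files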
00:23:57.140 23:31:48 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0
00:23:57.140 23:31:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:23:57.140 23:31:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:23:57.140 23:31:48 -- common/autotest_common.sh@10 -- # set +x
00:23:57.140 ************************************
00:23:57.140 START TEST ftl_dirty_shutdown
00:23:57.140 ************************************
00:23:57.140 23:31:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0
00:23:57.140 * Looking for test storage...
00:23:57.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:23:57.140 23:31:48 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:23:57.140 23:31:48 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh
00:23:57.140 23:31:48 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:23:57.140 23:31:48 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:23:57.140 23:31:48 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:23:57.140 23:31:48 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:23:57.140 23:31:48 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:57.141 23:31:48 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:23:57.141 23:31:48 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:23:57.141 23:31:48 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:23:57.141 23:31:48 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:23:57.141 23:31:48 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:23:57.141 23:31:48 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:23:57.141 23:31:48 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:23:57.141 23:31:48 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:23:57.141 23:31:48 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:23:57.141 23:31:48 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:23:57.141 23:31:48 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:23:57.141 23:31:48 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:23:57.141 23:31:48 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:23:57.141 23:31:48 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:23:57.141 23:31:48 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:23:57.141 23:31:48 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:23:57.141 23:31:48 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:23:57.141 23:31:48 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:23:57.141 23:31:48 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:23:57.141 23:31:48 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:23:57.141 23:31:48 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:23:57.141 23:31:48 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@15 -- # case $opt in
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@21 -- # shift 2
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@24 -- # timeout=240
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@45 -- # svcpid=76672
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:23:57.141 23:31:48 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76672
00:23:57.141 23:31:48 -- common/autotest_common.sh@819 -- # '[' -z 76672 ']'
00:23:57.141 23:31:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:57.141 23:31:48 -- common/autotest_common.sh@824 -- # local max_retries=100
00:23:57.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:57.141 23:31:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:57.141 23:31:48 -- common/autotest_common.sh@828 -- # xtrace_disable
00:23:57.141 23:31:48 -- common/autotest_common.sh@10 -- # set +x
00:23:57.401 [2024-07-26 23:31:48.987323] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:23:57.401 [2024-07-26 23:31:48.987464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76672 ]
00:23:57.660 [2024-07-26 23:31:49.162489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:57.920 [2024-07-26 23:31:49.432105] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:23:57.920 [2024-07-26 23:31:49.432311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:59.828 23:31:51 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:23:59.828 23:31:51 -- common/autotest_common.sh@852 -- # return 0
00:23:59.828 23:31:51 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424
00:23:59.828 23:31:51 -- ftl/common.sh@54 -- # local name=nvme0
00:23:59.828 23:31:51 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0
00:23:59.828 23:31:51 -- ftl/common.sh@56 -- # local size=103424
00:23:59.828 23:31:51 -- ftl/common.sh@59 -- # local base_bdev
00:23:59.828 23:31:51 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:23:59.828 23:31:51 -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:23:59.828 23:31:51 -- ftl/common.sh@62 -- # local base_size
00:23:59.828 23:31:51 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:23:59.828 23:31:51 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1
00:23:59.828 23:31:51 -- common/autotest_common.sh@1358 -- # local bdev_info
00:23:59.828 23:31:51 -- common/autotest_common.sh@1359 -- # local bs
00:23:59.828 23:31:51 -- common/autotest_common.sh@1360 -- # local nb
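get_bdev_size, whose trace starts here, computes a bdev's capacity in MiB from the bdev_get_bdevs JSON shown below: block_size 4096 times num_blocks 1310720 is 5 GiB, hence the bs=4096, nb=1310720 and bdev_size=5120 assignments further down. The same derivation as a stand-alone sketch, assuming a running SPDK target and jq on the PATH:

  bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1)
  bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for this QEMU NVMe namespace
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720
  echo $(( bs * nb / 1024 / 1024 ))             # 4096 * 1310720 / 2^20 = 5120 (MiB)

The [[ 103424 -le 5120 ]] test below then compares the requested 103424 MiB base size against that figure.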
# local nb 00:23:59.828 23:31:51 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:59.828 23:31:51 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:59.828 { 00:23:59.828 "name": "nvme0n1", 00:23:59.828 "aliases": [ 00:23:59.828 "8b1c8139-e5f5-49d6-a792-acaa0b1e5483" 00:23:59.828 ], 00:23:59.828 "product_name": "NVMe disk", 00:23:59.828 "block_size": 4096, 00:23:59.828 "num_blocks": 1310720, 00:23:59.828 "uuid": "8b1c8139-e5f5-49d6-a792-acaa0b1e5483", 00:23:59.828 "assigned_rate_limits": { 00:23:59.828 "rw_ios_per_sec": 0, 00:23:59.828 "rw_mbytes_per_sec": 0, 00:23:59.828 "r_mbytes_per_sec": 0, 00:23:59.828 "w_mbytes_per_sec": 0 00:23:59.828 }, 00:23:59.828 "claimed": true, 00:23:59.828 "claim_type": "read_many_write_one", 00:23:59.828 "zoned": false, 00:23:59.828 "supported_io_types": { 00:23:59.828 "read": true, 00:23:59.828 "write": true, 00:23:59.828 "unmap": true, 00:23:59.828 "write_zeroes": true, 00:23:59.828 "flush": true, 00:23:59.828 "reset": true, 00:23:59.828 "compare": true, 00:23:59.828 "compare_and_write": false, 00:23:59.828 "abort": true, 00:23:59.828 "nvme_admin": true, 00:23:59.828 "nvme_io": true 00:23:59.828 }, 00:23:59.828 "driver_specific": { 00:23:59.828 "nvme": [ 00:23:59.828 { 00:23:59.828 "pci_address": "0000:00:07.0", 00:23:59.828 "trid": { 00:23:59.828 "trtype": "PCIe", 00:23:59.828 "traddr": "0000:00:07.0" 00:23:59.828 }, 00:23:59.828 "ctrlr_data": { 00:23:59.828 "cntlid": 0, 00:23:59.828 "vendor_id": "0x1b36", 00:23:59.828 "model_number": "QEMU NVMe Ctrl", 00:23:59.828 "serial_number": "12341", 00:23:59.828 "firmware_revision": "8.0.0", 00:23:59.828 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:59.828 "oacs": { 00:23:59.828 "security": 0, 00:23:59.828 "format": 1, 00:23:59.828 "firmware": 0, 00:23:59.828 "ns_manage": 1 00:23:59.828 }, 00:23:59.828 "multi_ctrlr": false, 00:23:59.828 "ana_reporting": false 00:23:59.828 }, 00:23:59.828 "vs": { 00:23:59.828 "nvme_version": "1.4" 00:23:59.828 }, 00:23:59.828 "ns_data": { 00:23:59.828 "id": 1, 00:23:59.828 "can_share": false 00:23:59.828 } 00:23:59.828 } 00:23:59.828 ], 00:23:59.828 "mp_policy": "active_passive" 00:23:59.828 } 00:23:59.828 } 00:23:59.828 ]' 00:23:59.828 23:31:51 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:59.828 23:31:51 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:59.828 23:31:51 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:24:00.088 23:31:51 -- common/autotest_common.sh@1363 -- # nb=1310720 00:24:00.088 23:31:51 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:24:00.088 23:31:51 -- common/autotest_common.sh@1367 -- # echo 5120 00:24:00.088 23:31:51 -- ftl/common.sh@63 -- # base_size=5120 00:24:00.088 23:31:51 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:00.088 23:31:51 -- ftl/common.sh@67 -- # clear_lvols 00:24:00.088 23:31:51 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:00.088 23:31:51 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:00.088 23:31:51 -- ftl/common.sh@28 -- # stores=163a7699-2707-42b2-bc57-107779331483 00:24:00.088 23:31:51 -- ftl/common.sh@29 -- # for lvs in $stores 00:24:00.088 23:31:51 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 163a7699-2707-42b2-bc57-107779331483 00:24:00.348 23:31:51 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:00.607 23:31:52 -- ftl/common.sh@68 -- # 
lvs=07aba7e3-9de6-4567-9306-2f0508719c59 00:24:00.607 23:31:52 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 07aba7e3-9de6-4567-9306-2f0508719c59 00:24:00.866 23:31:52 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=d112c8d6-1567-40b0-afe8-9d7e476d0b2f 00:24:00.866 23:31:52 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:24:00.866 23:31:52 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 d112c8d6-1567-40b0-afe8-9d7e476d0b2f 00:24:00.866 23:31:52 -- ftl/common.sh@35 -- # local name=nvc0 00:24:00.866 23:31:52 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:24:00.866 23:31:52 -- ftl/common.sh@37 -- # local base_bdev=d112c8d6-1567-40b0-afe8-9d7e476d0b2f 00:24:00.866 23:31:52 -- ftl/common.sh@38 -- # local cache_size= 00:24:00.866 23:31:52 -- ftl/common.sh@41 -- # get_bdev_size d112c8d6-1567-40b0-afe8-9d7e476d0b2f 00:24:00.866 23:31:52 -- common/autotest_common.sh@1357 -- # local bdev_name=d112c8d6-1567-40b0-afe8-9d7e476d0b2f 00:24:00.866 23:31:52 -- common/autotest_common.sh@1358 -- # local bdev_info 00:24:00.866 23:31:52 -- common/autotest_common.sh@1359 -- # local bs 00:24:00.866 23:31:52 -- common/autotest_common.sh@1360 -- # local nb 00:24:00.866 23:31:52 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d112c8d6-1567-40b0-afe8-9d7e476d0b2f 00:24:00.866 23:31:52 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:24:00.866 { 00:24:00.866 "name": "d112c8d6-1567-40b0-afe8-9d7e476d0b2f", 00:24:00.866 "aliases": [ 00:24:00.866 "lvs/nvme0n1p0" 00:24:00.866 ], 00:24:00.866 "product_name": "Logical Volume", 00:24:00.866 "block_size": 4096, 00:24:00.866 "num_blocks": 26476544, 00:24:00.866 "uuid": "d112c8d6-1567-40b0-afe8-9d7e476d0b2f", 00:24:00.866 "assigned_rate_limits": { 00:24:00.866 "rw_ios_per_sec": 0, 00:24:00.866 "rw_mbytes_per_sec": 0, 00:24:00.866 "r_mbytes_per_sec": 0, 00:24:00.866 "w_mbytes_per_sec": 0 00:24:00.866 }, 00:24:00.866 "claimed": false, 00:24:00.866 "zoned": false, 00:24:00.866 "supported_io_types": { 00:24:00.866 "read": true, 00:24:00.866 "write": true, 00:24:00.866 "unmap": true, 00:24:00.866 "write_zeroes": true, 00:24:00.866 "flush": false, 00:24:00.866 "reset": true, 00:24:00.866 "compare": false, 00:24:00.866 "compare_and_write": false, 00:24:00.866 "abort": false, 00:24:00.866 "nvme_admin": false, 00:24:00.866 "nvme_io": false 00:24:00.866 }, 00:24:00.866 "driver_specific": { 00:24:00.866 "lvol": { 00:24:00.866 "lvol_store_uuid": "07aba7e3-9de6-4567-9306-2f0508719c59", 00:24:00.866 "base_bdev": "nvme0n1", 00:24:00.866 "thin_provision": true, 00:24:00.866 "snapshot": false, 00:24:00.866 "clone": false, 00:24:00.866 "esnap_clone": false 00:24:00.866 } 00:24:00.866 } 00:24:00.866 } 00:24:00.866 ]' 00:24:00.866 23:31:52 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:24:00.866 23:31:52 -- common/autotest_common.sh@1362 -- # bs=4096 00:24:00.866 23:31:52 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:24:01.126 23:31:52 -- common/autotest_common.sh@1363 -- # nb=26476544 00:24:01.126 23:31:52 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:24:01.126 23:31:52 -- common/autotest_common.sh@1367 -- # echo 103424 00:24:01.126 23:31:52 -- ftl/common.sh@41 -- # local base_size=5171 00:24:01.126 23:31:52 -- ftl/common.sh@44 -- # local nvc_bdev 00:24:01.126 23:31:52 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 
0000:00:06.0 00:24:01.126 23:31:52 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:01.126 23:31:52 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:01.126 23:31:52 -- ftl/common.sh@48 -- # get_bdev_size d112c8d6-1567-40b0-afe8-9d7e476d0b2f 00:24:01.126 23:31:52 -- common/autotest_common.sh@1357 -- # local bdev_name=d112c8d6-1567-40b0-afe8-9d7e476d0b2f 00:24:01.126 23:31:52 -- common/autotest_common.sh@1358 -- # local bdev_info 00:24:01.126 23:31:52 -- common/autotest_common.sh@1359 -- # local bs 00:24:01.126 23:31:52 -- common/autotest_common.sh@1360 -- # local nb 00:24:01.126 23:31:52 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d112c8d6-1567-40b0-afe8-9d7e476d0b2f 00:24:01.385 23:31:53 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:24:01.385 { 00:24:01.385 "name": "d112c8d6-1567-40b0-afe8-9d7e476d0b2f", 00:24:01.385 "aliases": [ 00:24:01.385 "lvs/nvme0n1p0" 00:24:01.385 ], 00:24:01.385 "product_name": "Logical Volume", 00:24:01.385 "block_size": 4096, 00:24:01.385 "num_blocks": 26476544, 00:24:01.385 "uuid": "d112c8d6-1567-40b0-afe8-9d7e476d0b2f", 00:24:01.385 "assigned_rate_limits": { 00:24:01.385 "rw_ios_per_sec": 0, 00:24:01.385 "rw_mbytes_per_sec": 0, 00:24:01.385 "r_mbytes_per_sec": 0, 00:24:01.385 "w_mbytes_per_sec": 0 00:24:01.385 }, 00:24:01.385 "claimed": false, 00:24:01.385 "zoned": false, 00:24:01.385 "supported_io_types": { 00:24:01.385 "read": true, 00:24:01.385 "write": true, 00:24:01.385 "unmap": true, 00:24:01.385 "write_zeroes": true, 00:24:01.385 "flush": false, 00:24:01.385 "reset": true, 00:24:01.385 "compare": false, 00:24:01.385 "compare_and_write": false, 00:24:01.385 "abort": false, 00:24:01.385 "nvme_admin": false, 00:24:01.385 "nvme_io": false 00:24:01.385 }, 00:24:01.385 "driver_specific": { 00:24:01.385 "lvol": { 00:24:01.385 "lvol_store_uuid": "07aba7e3-9de6-4567-9306-2f0508719c59", 00:24:01.385 "base_bdev": "nvme0n1", 00:24:01.385 "thin_provision": true, 00:24:01.385 "snapshot": false, 00:24:01.386 "clone": false, 00:24:01.386 "esnap_clone": false 00:24:01.386 } 00:24:01.386 } 00:24:01.386 } 00:24:01.386 ]' 00:24:01.386 23:31:53 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:24:01.386 23:31:53 -- common/autotest_common.sh@1362 -- # bs=4096 00:24:01.386 23:31:53 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:24:01.386 23:31:53 -- common/autotest_common.sh@1363 -- # nb=26476544 00:24:01.386 23:31:53 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:24:01.386 23:31:53 -- common/autotest_common.sh@1367 -- # echo 103424 00:24:01.386 23:31:53 -- ftl/common.sh@48 -- # cache_size=5171 00:24:01.386 23:31:53 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:01.645 23:31:53 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:01.645 23:31:53 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size d112c8d6-1567-40b0-afe8-9d7e476d0b2f 00:24:01.645 23:31:53 -- common/autotest_common.sh@1357 -- # local bdev_name=d112c8d6-1567-40b0-afe8-9d7e476d0b2f 00:24:01.645 23:31:53 -- common/autotest_common.sh@1358 -- # local bdev_info 00:24:01.645 23:31:53 -- common/autotest_common.sh@1359 -- # local bs 00:24:01.645 23:31:53 -- common/autotest_common.sh@1360 -- # local nb 00:24:01.645 23:31:53 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d112c8d6-1567-40b0-afe8-9d7e476d0b2f 00:24:01.905 23:31:53 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:24:01.905 { 00:24:01.905 
"name": "d112c8d6-1567-40b0-afe8-9d7e476d0b2f", 00:24:01.905 "aliases": [ 00:24:01.905 "lvs/nvme0n1p0" 00:24:01.905 ], 00:24:01.905 "product_name": "Logical Volume", 00:24:01.905 "block_size": 4096, 00:24:01.905 "num_blocks": 26476544, 00:24:01.905 "uuid": "d112c8d6-1567-40b0-afe8-9d7e476d0b2f", 00:24:01.905 "assigned_rate_limits": { 00:24:01.905 "rw_ios_per_sec": 0, 00:24:01.905 "rw_mbytes_per_sec": 0, 00:24:01.905 "r_mbytes_per_sec": 0, 00:24:01.905 "w_mbytes_per_sec": 0 00:24:01.905 }, 00:24:01.905 "claimed": false, 00:24:01.905 "zoned": false, 00:24:01.905 "supported_io_types": { 00:24:01.905 "read": true, 00:24:01.905 "write": true, 00:24:01.905 "unmap": true, 00:24:01.905 "write_zeroes": true, 00:24:01.905 "flush": false, 00:24:01.905 "reset": true, 00:24:01.905 "compare": false, 00:24:01.905 "compare_and_write": false, 00:24:01.905 "abort": false, 00:24:01.905 "nvme_admin": false, 00:24:01.905 "nvme_io": false 00:24:01.905 }, 00:24:01.905 "driver_specific": { 00:24:01.905 "lvol": { 00:24:01.905 "lvol_store_uuid": "07aba7e3-9de6-4567-9306-2f0508719c59", 00:24:01.905 "base_bdev": "nvme0n1", 00:24:01.905 "thin_provision": true, 00:24:01.905 "snapshot": false, 00:24:01.905 "clone": false, 00:24:01.905 "esnap_clone": false 00:24:01.905 } 00:24:01.905 } 00:24:01.905 } 00:24:01.905 ]' 00:24:01.905 23:31:53 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:24:01.905 23:31:53 -- common/autotest_common.sh@1362 -- # bs=4096 00:24:01.905 23:31:53 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:24:01.905 23:31:53 -- common/autotest_common.sh@1363 -- # nb=26476544 00:24:01.905 23:31:53 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:24:01.905 23:31:53 -- common/autotest_common.sh@1367 -- # echo 103424 00:24:01.905 23:31:53 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:01.905 23:31:53 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d112c8d6-1567-40b0-afe8-9d7e476d0b2f --l2p_dram_limit 10' 00:24:01.905 23:31:53 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:01.905 23:31:53 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:24:01.905 23:31:53 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:01.905 23:31:53 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d112c8d6-1567-40b0-afe8-9d7e476d0b2f --l2p_dram_limit 10 -c nvc0n1p0 00:24:02.165 [2024-07-26 23:31:53.714461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.165 [2024-07-26 23:31:53.714519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:02.165 [2024-07-26 23:31:53.714539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:02.165 [2024-07-26 23:31:53.714550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.165 [2024-07-26 23:31:53.714631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.165 [2024-07-26 23:31:53.714643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:02.165 [2024-07-26 23:31:53.714656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:02.165 [2024-07-26 23:31:53.714666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.165 [2024-07-26 23:31:53.714691] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:02.165 [2024-07-26 23:31:53.715935] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:02.165 [2024-07-26 23:31:53.715994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.165 [2024-07-26 23:31:53.716006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:02.165 [2024-07-26 23:31:53.716020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:24:02.165 [2024-07-26 23:31:53.716031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.165 [2024-07-26 23:31:53.716134] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID aa1159e5-ef10-4e53-96ba-e9c2ed419c79 00:24:02.165 [2024-07-26 23:31:53.718451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.165 [2024-07-26 23:31:53.718489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:02.165 [2024-07-26 23:31:53.718502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:02.165 [2024-07-26 23:31:53.718516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.165 [2024-07-26 23:31:53.731841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.165 [2024-07-26 23:31:53.731882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:02.165 [2024-07-26 23:31:53.731895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.285 ms 00:24:02.165 [2024-07-26 23:31:53.731908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.165 [2024-07-26 23:31:53.732054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.165 [2024-07-26 23:31:53.732073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:02.165 [2024-07-26 23:31:53.732085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:24:02.165 [2024-07-26 23:31:53.732103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.165 [2024-07-26 23:31:53.732182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.165 [2024-07-26 23:31:53.732198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:02.165 [2024-07-26 23:31:53.732209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:02.165 [2024-07-26 23:31:53.732226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.165 [2024-07-26 23:31:53.732258] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:02.165 [2024-07-26 23:31:53.738917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.165 [2024-07-26 23:31:53.738952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:02.165 [2024-07-26 23:31:53.738976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.681 ms 00:24:02.165 [2024-07-26 23:31:53.738987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.165 [2024-07-26 23:31:53.739026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.165 [2024-07-26 23:31:53.739037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:02.165 [2024-07-26 23:31:53.739050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:02.165 [2024-07-26 23:31:53.739059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:02.165 [2024-07-26 23:31:53.739109] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:02.165 [2024-07-26 23:31:53.739244] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:02.165 [2024-07-26 23:31:53.739265] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:02.165 [2024-07-26 23:31:53.739279] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:02.165 [2024-07-26 23:31:53.739296] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:02.165 [2024-07-26 23:31:53.739308] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:02.165 [2024-07-26 23:31:53.739322] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:02.165 [2024-07-26 23:31:53.739332] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:02.165 [2024-07-26 23:31:53.739346] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:02.165 [2024-07-26 23:31:53.739360] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:02.165 [2024-07-26 23:31:53.739373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.165 [2024-07-26 23:31:53.739383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:02.165 [2024-07-26 23:31:53.739409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:24:02.165 [2024-07-26 23:31:53.739419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.165 [2024-07-26 23:31:53.739482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.165 [2024-07-26 23:31:53.739492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:02.165 [2024-07-26 23:31:53.739505] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:02.165 [2024-07-26 23:31:53.739514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.165 [2024-07-26 23:31:53.739609] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:02.165 [2024-07-26 23:31:53.739621] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:02.165 [2024-07-26 23:31:53.739635] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:02.165 [2024-07-26 23:31:53.739645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.165 [2024-07-26 23:31:53.739658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:02.165 [2024-07-26 23:31:53.739668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:02.165 [2024-07-26 23:31:53.739680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:02.165 [2024-07-26 23:31:53.739689] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:02.165 [2024-07-26 23:31:53.739701] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:02.165 [2024-07-26 23:31:53.739710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:02.165 [2024-07-26 23:31:53.739723] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:02.165 [2024-07-26 23:31:53.739732] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:02.165 [2024-07-26 23:31:53.739746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:02.165 [2024-07-26 23:31:53.739757] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:02.165 [2024-07-26 23:31:53.739769] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:02.165 [2024-07-26 23:31:53.739778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.165 [2024-07-26 23:31:53.739793] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:02.165 [2024-07-26 23:31:53.739803] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:02.165 [2024-07-26 23:31:53.739814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.165 [2024-07-26 23:31:53.739823] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:02.165 [2024-07-26 23:31:53.739835] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:02.165 [2024-07-26 23:31:53.739844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:02.165 [2024-07-26 23:31:53.739856] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:02.165 [2024-07-26 23:31:53.739865] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:02.165 [2024-07-26 23:31:53.739877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:02.165 [2024-07-26 23:31:53.739885] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:02.165 [2024-07-26 23:31:53.739897] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:02.165 [2024-07-26 23:31:53.739906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:02.166 [2024-07-26 23:31:53.739918] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:02.166 [2024-07-26 23:31:53.739927] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:02.166 [2024-07-26 23:31:53.739938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:02.166 [2024-07-26 23:31:53.739958] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:02.166 [2024-07-26 23:31:53.739985] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:02.166 [2024-07-26 23:31:53.739994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:02.166 [2024-07-26 23:31:53.740006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:02.166 [2024-07-26 23:31:53.740015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:02.166 [2024-07-26 23:31:53.740027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:02.166 [2024-07-26 23:31:53.740036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:02.166 [2024-07-26 23:31:53.740050] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:02.166 [2024-07-26 23:31:53.740058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:02.166 [2024-07-26 23:31:53.740070] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:02.166 [2024-07-26 23:31:53.740081] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:02.166 [2024-07-26 23:31:53.740094] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:02.166 
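The NV-cache region sizes in the dump above follow directly from the parameters printed just before it (L2P entries, L2P address size, P2L checkpoint pages). A quick back-of-the-envelope check, shell arithmetic only, not part of the test itself:

l2p_entries=20971520       # "L2P entries" from ftl_layout_setup above
l2p_addr_size=4            # "L2P address size" (bytes per mapping entry)
echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))   # 80 -> "Region l2p ... blocks: 80.00 MiB"
p2l_pages=1024             # "P2L checkpoint pages"; FTL block size is 4096 B in this run
echo $(( p2l_pages * 4096 / 1024 / 1024 ))              # 4 -> each "Region p2l* ... blocks: 4.00 MiB"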
[2024-07-26 23:31:53.740104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.166 [2024-07-26 23:31:53.740117] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:02.166 [2024-07-26 23:31:53.740128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:02.166 [2024-07-26 23:31:53.740140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:02.166 [2024-07-26 23:31:53.740150] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:02.166 [2024-07-26 23:31:53.740165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:02.166 [2024-07-26 23:31:53.740175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:02.166 [2024-07-26 23:31:53.740187] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:02.166 [2024-07-26 23:31:53.740200] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:02.166 [2024-07-26 23:31:53.740218] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:02.166 [2024-07-26 23:31:53.740228] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:02.166 [2024-07-26 23:31:53.740241] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:02.166 [2024-07-26 23:31:53.740251] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:02.166 [2024-07-26 23:31:53.740265] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:02.166 [2024-07-26 23:31:53.740276] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:02.166 [2024-07-26 23:31:53.740289] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:02.166 [2024-07-26 23:31:53.740299] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:02.166 [2024-07-26 23:31:53.740312] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:02.166 [2024-07-26 23:31:53.740323] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:02.166 [2024-07-26 23:31:53.740335] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:02.166 [2024-07-26 23:31:53.740345] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:02.166 [2024-07-26 23:31:53.740363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:02.166 [2024-07-26 23:31:53.740373] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:02.166 [2024-07-26 
23:31:53.740386] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:02.166 [2024-07-26 23:31:53.740397] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:02.166 [2024-07-26 23:31:53.740410] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:02.166 [2024-07-26 23:31:53.740420] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:02.166 [2024-07-26 23:31:53.740433] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:02.166 [2024-07-26 23:31:53.740443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.166 [2024-07-26 23:31:53.740457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:02.166 [2024-07-26 23:31:53.740467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:24:02.166 [2024-07-26 23:31:53.740480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.166 [2024-07-26 23:31:53.769816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.166 [2024-07-26 23:31:53.769853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.166 [2024-07-26 23:31:53.769867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.337 ms 00:24:02.166 [2024-07-26 23:31:53.769895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.166 [2024-07-26 23:31:53.769992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.166 [2024-07-26 23:31:53.770008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:02.166 [2024-07-26 23:31:53.770019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:02.166 [2024-07-26 23:31:53.770032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.166 [2024-07-26 23:31:53.827891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.166 [2024-07-26 23:31:53.827928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.166 [2024-07-26 23:31:53.827963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.901 ms 00:24:02.166 [2024-07-26 23:31:53.827977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.166 [2024-07-26 23:31:53.828171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.166 [2024-07-26 23:31:53.828194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.166 [2024-07-26 23:31:53.828206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:02.166 [2024-07-26 23:31:53.828219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.166 [2024-07-26 23:31:53.828993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.166 [2024-07-26 23:31:53.829015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.166 [2024-07-26 23:31:53.829027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.724 ms 00:24:02.166 [2024-07-26 23:31:53.829040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:02.166 [2024-07-26 23:31:53.829154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.166 [2024-07-26 23:31:53.829174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.166 [2024-07-26 23:31:53.829185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:24:02.166 [2024-07-26 23:31:53.829197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.166 [2024-07-26 23:31:53.856848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.166 [2024-07-26 23:31:53.856887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.166 [2024-07-26 23:31:53.856900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.675 ms 00:24:02.166 [2024-07-26 23:31:53.856914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.166 [2024-07-26 23:31:53.871536] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:02.166 [2024-07-26 23:31:53.876547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.166 [2024-07-26 23:31:53.876578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:02.166 [2024-07-26 23:31:53.876593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.560 ms 00:24:02.166 [2024-07-26 23:31:53.876619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.424 [2024-07-26 23:31:54.071714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.424 [2024-07-26 23:31:54.071771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:02.424 [2024-07-26 23:31:54.071792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 195.375 ms 00:24:02.424 [2024-07-26 23:31:54.071803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.424 [2024-07-26 23:31:54.071858] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
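The "l2p maximum resident size is: 9 (of 10) MiB" notice above is the effect of the --l2p_dram_limit 10 argument the test passed to bdev_ftl_create: the full 80 MiB L2P table cannot stay resident, so it is paged against the cache device. For reference, the create call as issued earlier in this run (bdev names are specific to this run):

# Cap resident L2P at 10 MiB; -d is the thin-provisioned base lvol,
# -c the NV cache bdev split off nvc0n1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
    -d d112c8d6-1567-40b0-afe8-9d7e476d0b2f \
    --l2p_dram_limit 10 -c nvc0n1p0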
00:24:02.424 [2024-07-26 23:31:54.071873] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:24:08.997 [2024-07-26 23:32:00.399539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.997 [2024-07-26 23:32:00.399614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:08.997 [2024-07-26 23:32:00.399636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6337.955 ms 00:24:08.997 [2024-07-26 23:32:00.399647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.997 [2024-07-26 23:32:00.399867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.997 [2024-07-26 23:32:00.399880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:08.997 [2024-07-26 23:32:00.399895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:24:08.997 [2024-07-26 23:32:00.399905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.997 [2024-07-26 23:32:00.436135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.997 [2024-07-26 23:32:00.436172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:08.997 [2024-07-26 23:32:00.436188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.224 ms 00:24:08.997 [2024-07-26 23:32:00.436198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.997 [2024-07-26 23:32:00.470940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.997 [2024-07-26 23:32:00.470981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:08.997 [2024-07-26 23:32:00.471002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.753 ms 00:24:08.997 [2024-07-26 23:32:00.471012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.997 [2024-07-26 23:32:00.471463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.997 [2024-07-26 23:32:00.471477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:08.997 [2024-07-26 23:32:00.471509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:24:08.997 [2024-07-26 23:32:00.471519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.997 [2024-07-26 23:32:00.567458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.997 [2024-07-26 23:32:00.567493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:08.997 [2024-07-26 23:32:00.567510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.042 ms 00:24:08.997 [2024-07-26 23:32:00.567520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.997 [2024-07-26 23:32:00.605385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.997 [2024-07-26 23:32:00.605421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:08.997 [2024-07-26 23:32:00.605438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.883 ms 00:24:08.997 [2024-07-26 23:32:00.605451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.997 [2024-07-26 23:32:00.607793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.997 [2024-07-26 23:32:00.607821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
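A rough sanity check on the scrub step above: zeroing the 4 GiB data_nvc region took 6337.955 ms, i.e. roughly 646 MiB/s to the cache device. Back-of-the-envelope only; the test does not print this figure:

scrub_mib=4096   # "Scrubbing 4GiB" above
scrub_ms=6338    # "duration: 6337.955 ms", rounded
echo $(( scrub_mib * 1000 / scrub_ms ))   # ~646 MiB/s effective zeroing rate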
00:24:08.997 [2024-07-26 23:32:00.607840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.305 ms 00:24:08.997 [2024-07-26 23:32:00.607850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.997 [2024-07-26 23:32:00.643825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.997 [2024-07-26 23:32:00.643858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:08.997 [2024-07-26 23:32:00.643875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.979 ms 00:24:08.997 [2024-07-26 23:32:00.643884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.997 [2024-07-26 23:32:00.643938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.997 [2024-07-26 23:32:00.643957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:08.997 [2024-07-26 23:32:00.644001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:08.997 [2024-07-26 23:32:00.644011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.997 [2024-07-26 23:32:00.644123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:08.997 [2024-07-26 23:32:00.644136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:08.997 [2024-07-26 23:32:00.644169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:08.997 [2024-07-26 23:32:00.644179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:08.997 [2024-07-26 23:32:00.645550] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6941.862 ms, result 0 00:24:08.997 { 00:24:08.997 "name": "ftl0", 00:24:08.997 "uuid": "aa1159e5-ef10-4e53-96ba-e9c2ed419c79" 00:24:08.997 } 00:24:08.997 23:32:00 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:08.997 23:32:00 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:09.256 23:32:00 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:09.256 23:32:00 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:09.256 23:32:00 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:09.514 /dev/nbd0 00:24:09.514 23:32:01 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:09.514 23:32:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:09.514 23:32:01 -- common/autotest_common.sh@857 -- # local i 00:24:09.514 23:32:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:09.514 23:32:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:09.514 23:32:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:09.514 23:32:01 -- common/autotest_common.sh@861 -- # break 00:24:09.514 23:32:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:09.514 23:32:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:09.515 23:32:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:09.515 1+0 records in 00:24:09.515 1+0 records out 00:24:09.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577031 s, 7.1 MB/s 00:24:09.515 23:32:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:09.515 23:32:01 -- common/autotest_common.sh@874 -- # size=4096 00:24:09.515 23:32:01 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:09.515 23:32:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:09.515 23:32:01 -- common/autotest_common.sh@877 -- # return 0 00:24:09.515 23:32:01 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:09.515 [2024-07-26 23:32:01.169603] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:09.515 [2024-07-26 23:32:01.169715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76857 ] 00:24:09.773 [2024-07-26 23:32:01.340705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.032 [2024-07-26 23:32:01.549702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.163  Copying: 228/1024 [MB] (228 MBps) Copying: 457/1024 [MB] (229 MBps) Copying: 685/1024 [MB] (228 MBps) Copying: 907/1024 [MB] (221 MBps) Copying: 1024/1024 [MB] (average 227 MBps) 00:24:16.163 00:24:16.163 23:32:07 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:18.068 23:32:09 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:18.068 [2024-07-26 23:32:09.408719] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:18.068 [2024-07-26 23:32:09.408839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76947 ] 00:24:18.068 [2024-07-26 23:32:09.580435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.068 [2024-07-26 23:32:09.793830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.060  Copying: 14/1024 [MB] (14 MBps) Copying: 29/1024 [MB] (14 MBps) Copying: 43/1024 [MB] (14 MBps) Copying: 58/1024 [MB] (14 MBps) Copying: 73/1024 [MB] (15 MBps) Copying: 89/1024 [MB] (15 MBps) Copying: 105/1024 [MB] (15 MBps) Copying: 119/1024 [MB] (14 MBps) Copying: 134/1024 [MB] (14 MBps) Copying: 149/1024 [MB] (15 MBps) Copying: 165/1024 [MB] (15 MBps) Copying: 180/1024 [MB] (15 MBps) Copying: 195/1024 [MB] (14 MBps) Copying: 211/1024 [MB] (15 MBps) Copying: 227/1024 [MB] (16 MBps) Copying: 243/1024 [MB] (16 MBps) Copying: 260/1024 [MB] (16 MBps) Copying: 276/1024 [MB] (16 MBps) Copying: 292/1024 [MB] (16 MBps) Copying: 309/1024 [MB] (16 MBps) Copying: 325/1024 [MB] (16 MBps) Copying: 341/1024 [MB] (16 MBps) Copying: 357/1024 [MB] (16 MBps) Copying: 374/1024 [MB] (16 MBps) Copying: 390/1024 [MB] (15 MBps) Copying: 406/1024 [MB] (16 MBps) Copying: 422/1024 [MB] (16 MBps) Copying: 438/1024 [MB] (16 MBps) Copying: 455/1024 [MB] (16 MBps) Copying: 471/1024 [MB] (16 MBps) Copying: 487/1024 [MB] (16 MBps) Copying: 504/1024 [MB] (16 MBps) Copying: 520/1024 [MB] (16 MBps) Copying: 536/1024 [MB] (16 MBps) Copying: 553/1024 [MB] (16 MBps) Copying: 569/1024 [MB] (16 MBps) Copying: 585/1024 [MB] (16 MBps) Copying: 601/1024 [MB] (16 MBps) Copying: 618/1024 [MB] (16 MBps) 
Copying: 634/1024 [MB] (16 MBps) Copying: 650/1024 [MB] (16 MBps) Copying: 667/1024 [MB] (16 MBps) Copying: 684/1024 [MB] (16 MBps) Copying: 700/1024 [MB] (16 MBps) Copying: 717/1024 [MB] (16 MBps) Copying: 734/1024 [MB] (16 MBps) Copying: 751/1024 [MB] (17 MBps) Copying: 768/1024 [MB] (17 MBps) Copying: 785/1024 [MB] (17 MBps) Copying: 802/1024 [MB] (17 MBps) Copying: 819/1024 [MB] (16 MBps) Copying: 835/1024 [MB] (16 MBps) Copying: 852/1024 [MB] (16 MBps) Copying: 870/1024 [MB] (17 MBps) Copying: 886/1024 [MB] (16 MBps) Copying: 902/1024 [MB] (15 MBps) Copying: 919/1024 [MB] (16 MBps) Copying: 935/1024 [MB] (16 MBps) Copying: 951/1024 [MB] (16 MBps) Copying: 968/1024 [MB] (16 MBps) Copying: 984/1024 [MB] (16 MBps) Copying: 1001/1024 [MB] (16 MBps) Copying: 1017/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 16 MBps) 00:25:23.060 00:25:23.060 23:33:14 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:23.060 23:33:14 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:23.060 23:33:14 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:23.318 [2024-07-26 23:33:14.956908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.318 [2024-07-26 23:33:14.956988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:23.318 [2024-07-26 23:33:14.957007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:23.318 [2024-07-26 23:33:14.957026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.318 [2024-07-26 23:33:14.957100] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:23.318 [2024-07-26 23:33:14.961416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.318 [2024-07-26 23:33:14.961450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:23.318 [2024-07-26 23:33:14.961467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.271 ms 00:25:23.318 [2024-07-26 23:33:14.961478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.318 [2024-07-26 23:33:14.963755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.318 [2024-07-26 23:33:14.963800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:23.318 [2024-07-26 23:33:14.963822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:25:23.318 [2024-07-26 23:33:14.963833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.318 [2024-07-26 23:33:14.981932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.319 [2024-07-26 23:33:14.981980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:23.319 [2024-07-26 23:33:14.981999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.096 ms 00:25:23.319 [2024-07-26 23:33:14.982010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.319 [2024-07-26 23:33:14.987179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.319 [2024-07-26 23:33:14.987213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:23.319 [2024-07-26 23:33:14.987228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.124 ms 00:25:23.319 [2024-07-26 23:33:14.987238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:25:23.319 [2024-07-26 23:33:15.023591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.319 [2024-07-26 23:33:15.023628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:23.319 [2024-07-26 23:33:15.023645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.312 ms 00:25:23.319 [2024-07-26 23:33:15.023654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.319 [2024-07-26 23:33:15.046374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.319 [2024-07-26 23:33:15.046511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:23.319 [2024-07-26 23:33:15.046654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.710 ms 00:25:23.319 [2024-07-26 23:33:15.046694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.319 [2024-07-26 23:33:15.046876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.319 [2024-07-26 23:33:15.047019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:23.319 [2024-07-26 23:33:15.047039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:25:23.319 [2024-07-26 23:33:15.047049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.580 [2024-07-26 23:33:15.083870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.580 [2024-07-26 23:33:15.083910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:23.580 [2024-07-26 23:33:15.083927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.848 ms 00:25:23.580 [2024-07-26 23:33:15.083952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.580 [2024-07-26 23:33:15.121651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.580 [2024-07-26 23:33:15.122144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:23.580 [2024-07-26 23:33:15.122252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.558 ms 00:25:23.580 [2024-07-26 23:33:15.122282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.580 [2024-07-26 23:33:15.168514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.580 [2024-07-26 23:33:15.168572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:23.580 [2024-07-26 23:33:15.168593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.161 ms 00:25:23.580 [2024-07-26 23:33:15.168604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.580 [2024-07-26 23:33:15.203586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.580 [2024-07-26 23:33:15.203628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:23.580 [2024-07-26 23:33:15.203646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.921 ms 00:25:23.580 [2024-07-26 23:33:15.203658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.580 [2024-07-26 23:33:15.203712] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:23.580 [2024-07-26 23:33:15.203731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:23.580 [2024-07-26 23:33:15.203749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free
00:25:23.580 [2024-07-26 23:33:15.203762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
[... Bands 4 through 100, identical: 0 / 261120 wr_cnt: 0 state: free ...]
00:25:23.581 [2024-07-26 23:33:15.205371] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:23.581 [2024-07-26 23:33:15.205394] ftl_debug.c:
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa1159e5-ef10-4e53-96ba-e9c2ed419c79 00:25:23.581 [2024-07-26 23:33:15.205407] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:23.581 [2024-07-26 23:33:15.205421] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:23.581 [2024-07-26 23:33:15.205436] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:23.581 [2024-07-26 23:33:15.205450] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:23.581 [2024-07-26 23:33:15.205461] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:23.581 [2024-07-26 23:33:15.205477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:23.581 [2024-07-26 23:33:15.205488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:23.581 [2024-07-26 23:33:15.205501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:23.581 [2024-07-26 23:33:15.205511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:23.581 [2024-07-26 23:33:15.205528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.581 [2024-07-26 23:33:15.205540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:23.581 [2024-07-26 23:33:15.205554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.823 ms 00:25:23.581 [2024-07-26 23:33:15.205565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.581 [2024-07-26 23:33:15.223946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.581 [2024-07-26 23:33:15.224001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:23.581 [2024-07-26 23:33:15.224019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.345 ms 00:25:23.581 [2024-07-26 23:33:15.224031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.581 [2024-07-26 23:33:15.224251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.581 [2024-07-26 23:33:15.224266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:23.581 [2024-07-26 23:33:15.224281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:25:23.581 [2024-07-26 23:33:15.224291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.581 [2024-07-26 23:33:15.289355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.581 [2024-07-26 23:33:15.289395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:23.581 [2024-07-26 23:33:15.289413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.581 [2024-07-26 23:33:15.289425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.581 [2024-07-26 23:33:15.289495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.581 [2024-07-26 23:33:15.289508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:23.582 [2024-07-26 23:33:15.289523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.582 [2024-07-26 23:33:15.289535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.582 [2024-07-26 23:33:15.289625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.582 [2024-07-26 23:33:15.289643] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:23.582 [2024-07-26 23:33:15.289658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.582 [2024-07-26 23:33:15.289671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.582 [2024-07-26 23:33:15.289698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.582 [2024-07-26 23:33:15.289710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:23.582 [2024-07-26 23:33:15.289724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.582 [2024-07-26 23:33:15.289736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.872 [2024-07-26 23:33:15.404052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.872 [2024-07-26 23:33:15.404105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:23.872 [2024-07-26 23:33:15.404123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.872 [2024-07-26 23:33:15.404136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.872 [2024-07-26 23:33:15.447107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.872 [2024-07-26 23:33:15.447150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:23.872 [2024-07-26 23:33:15.447167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.872 [2024-07-26 23:33:15.447179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.872 [2024-07-26 23:33:15.447262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.872 [2024-07-26 23:33:15.447276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:23.872 [2024-07-26 23:33:15.447295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.872 [2024-07-26 23:33:15.447308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.872 [2024-07-26 23:33:15.447365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.872 [2024-07-26 23:33:15.447385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:23.872 [2024-07-26 23:33:15.447400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.872 [2024-07-26 23:33:15.447412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.872 [2024-07-26 23:33:15.447539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.872 [2024-07-26 23:33:15.447554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:23.872 [2024-07-26 23:33:15.447576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.872 [2024-07-26 23:33:15.447590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.872 [2024-07-26 23:33:15.447643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.872 [2024-07-26 23:33:15.447656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:23.872 [2024-07-26 23:33:15.447670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.872 [2024-07-26 23:33:15.447683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.872 [2024-07-26 23:33:15.447732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
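The statistics dump at the start of this shutdown trace is the key datapoint: 960 total (media) writes against 0 user writes, so the write amplification factor degenerates to "WAF: inf" (the 960 writes are presumably FTL-internal metadata traffic; WAF is the total-to-user write ratio, and the denominator is zero here). A minimal sketch of recomputing it from a captured log; "ftl.log" is a hypothetical capture, not a file this test creates:

  # Recompute the WAF reported by ftl_dev_dump_stats from a saved log.
  # Assumes the console output was captured to ftl.log (hypothetical name).
  total=$(grep -o 'total writes: [0-9]*' ftl.log | tail -n1 | awk '{print $3}')
  user=$(grep -o 'user writes: [0-9]*' ftl.log | tail -n1 | awk '{print $3}')
  if [ "$user" -eq 0 ]; then
      echo 'WAF: inf'    # matches the dump above: no user writes yet
  else
      awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.2f\n", t / u }'
  fi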
00:25:23.872 [2024-07-26 23:33:15.447745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:23.872 [2024-07-26 23:33:15.447759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.872 [2024-07-26 23:33:15.447773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.872 [2024-07-26 23:33:15.447832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:23.872 [2024-07-26 23:33:15.447844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:23.872 [2024-07-26 23:33:15.447858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:23.872 [2024-07-26 23:33:15.447876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.872 [2024-07-26 23:33:15.448070] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 491.883 ms, result 0 00:25:23.872 true 00:25:23.872 23:33:15 -- ftl/dirty_shutdown.sh@83 -- # kill -9 76672 00:25:23.872 23:33:15 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76672 00:25:23.872 23:33:15 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:23.872 [2024-07-26 23:33:15.582103] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:23.872 [2024-07-26 23:33:15.582215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77620 ] 00:25:24.136 [2024-07-26 23:33:15.757138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.395 [2024-07-26 23:33:15.966591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.895  Copying: 213/1024 [MB] (213 MBps) Copying: 431/1024 [MB] (217 MBps) Copying: 649/1024 [MB] (218 MBps) Copying: 856/1024 [MB] (207 MBps) Copying: 1024/1024 [MB] (average 213 MBps) 00:25:30.895 00:25:30.895 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76672 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:30.895 23:33:22 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:30.895 [2024-07-26 23:33:22.419951] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
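The commands above are the dirty-shutdown sequence itself: dirty_shutdown.sh hard-kills the spdk_tgt process (pid 76672) so ftl0 never shuts down cleanly, then drives I/O with spdk_dd as a standalone app. The first invocation stages 262144 random 4096-byte blocks (262144 x 4096 B = 1024 MB, exactly the "1024/1024 [MB]" copied above) into a scratch file; the second replays that file onto ftl0 at a 262144-block seek offset using the saved ftl.json config. A condensed sketch with shortened paths; $pid stands in for the value the real script tracks:

  # Condensed from the commands above (paths shortened, $pid hypothetical).
  kill -9 "$pid"                            # simulate a dirty shutdown of spdk_tgt
  rm -f "/dev/shm/spdk_tgt_trace.pid$pid"
  # stage 1 GiB of random data: 262144 blocks * 4096 B = 1024 MB
  spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
  # replay it onto the recovered FTL bdev, starting 262144 blocks in
  spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json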
00:25:30.895 [2024-07-26 23:33:22.420096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77693 ] 00:25:30.895 [2024-07-26 23:33:22.595371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.154 [2024-07-26 23:33:22.801332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.721 [2024-07-26 23:33:23.180719] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:31.721 [2024-07-26 23:33:23.180797] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:31.721 [2024-07-26 23:33:23.244202] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:31.721 [2024-07-26 23:33:23.244528] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:31.721 [2024-07-26 23:33:23.244768] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:31.979 [2024-07-26 23:33:23.572151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.979 [2024-07-26 23:33:23.572197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:31.979 [2024-07-26 23:33:23.572215] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:31.979 [2024-07-26 23:33:23.572227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.979 [2024-07-26 23:33:23.572278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.979 [2024-07-26 23:33:23.572291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:31.979 [2024-07-26 23:33:23.572303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:31.979 [2024-07-26 23:33:23.572314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.979 [2024-07-26 23:33:23.572342] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:31.979 [2024-07-26 23:33:23.573510] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:31.979 [2024-07-26 23:33:23.573547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.979 [2024-07-26 23:33:23.573558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:31.979 [2024-07-26 23:33:23.573574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:25:31.979 [2024-07-26 23:33:23.573585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.979 [2024-07-26 23:33:23.575069] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:31.979 [2024-07-26 23:33:23.593623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.979 [2024-07-26 23:33:23.593683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:31.979 [2024-07-26 23:33:23.593700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.584 ms 00:25:31.979 [2024-07-26 23:33:23.593711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.979 [2024-07-26 23:33:23.593775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.979 [2024-07-26 23:33:23.593788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:31.979 
[2024-07-26 23:33:23.593800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:31.979 [2024-07-26 23:33:23.593815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.979 [2024-07-26 23:33:23.600759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.979 [2024-07-26 23:33:23.600791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:31.979 [2024-07-26 23:33:23.600805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.879 ms 00:25:31.980 [2024-07-26 23:33:23.600816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.980 [2024-07-26 23:33:23.600903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.980 [2024-07-26 23:33:23.600919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:31.980 [2024-07-26 23:33:23.600936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:31.980 [2024-07-26 23:33:23.600947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.980 [2024-07-26 23:33:23.601015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.980 [2024-07-26 23:33:23.601029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:31.980 [2024-07-26 23:33:23.601041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:31.980 [2024-07-26 23:33:23.601051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.980 [2024-07-26 23:33:23.601082] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:31.980 [2024-07-26 23:33:23.606382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.980 [2024-07-26 23:33:23.606420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:31.980 [2024-07-26 23:33:23.606433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.318 ms 00:25:31.980 [2024-07-26 23:33:23.606445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.980 [2024-07-26 23:33:23.606481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.980 [2024-07-26 23:33:23.606493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:31.980 [2024-07-26 23:33:23.606509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:31.980 [2024-07-26 23:33:23.606520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.980 [2024-07-26 23:33:23.606573] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:31.980 [2024-07-26 23:33:23.606611] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:31.980 [2024-07-26 23:33:23.606643] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:31.980 [2024-07-26 23:33:23.606663] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:31.980 [2024-07-26 23:33:23.606726] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:31.980 [2024-07-26 23:33:23.606745] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:31.980 [2024-07-26 23:33:23.606759] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:31.980 [2024-07-26 23:33:23.606773] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:31.980 [2024-07-26 23:33:23.606786] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:31.980 [2024-07-26 23:33:23.606798] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:31.980 [2024-07-26 23:33:23.606809] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:31.980 [2024-07-26 23:33:23.606820] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:31.980 [2024-07-26 23:33:23.606830] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:31.980 [2024-07-26 23:33:23.606841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.980 [2024-07-26 23:33:23.606853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:31.980 [2024-07-26 23:33:23.606869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:25:31.980 [2024-07-26 23:33:23.606879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.980 [2024-07-26 23:33:23.606934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.980 [2024-07-26 23:33:23.606946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:31.980 [2024-07-26 23:33:23.606957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:31.980 [2024-07-26 23:33:23.606988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.980 [2024-07-26 23:33:23.607055] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:31.980 [2024-07-26 23:33:23.607070] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:31.980 [2024-07-26 23:33:23.607082] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:31.980 [2024-07-26 23:33:23.607094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.980 [2024-07-26 23:33:23.607109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:31.980 [2024-07-26 23:33:23.607119] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:31.980 [2024-07-26 23:33:23.607130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:31.980 [2024-07-26 23:33:23.607142] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:31.980 [2024-07-26 23:33:23.607152] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:31.980 [2024-07-26 23:33:23.607164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.980 [2024-07-26 23:33:23.607175] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:31.980 [2024-07-26 23:33:23.607185] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:31.980 [2024-07-26 23:33:23.607196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.980 [2024-07-26 23:33:23.607206] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:31.980 [2024-07-26 23:33:23.607216] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:31.980 [2024-07-26 23:33:23.607226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:25:31.980 [2024-07-26 23:33:23.607236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:31.980 [2024-07-26 23:33:23.607257] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:31.980 [2024-07-26 23:33:23.607267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.980 [2024-07-26 23:33:23.607278] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:31.980 [2024-07-26 23:33:23.607289] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:31.980 [2024-07-26 23:33:23.607299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:31.980 [2024-07-26 23:33:23.607309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:31.980 [2024-07-26 23:33:23.607319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:31.980 [2024-07-26 23:33:23.607329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:31.980 [2024-07-26 23:33:23.607339] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:31.980 [2024-07-26 23:33:23.607349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:31.980 [2024-07-26 23:33:23.607358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:31.980 [2024-07-26 23:33:23.607369] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:31.980 [2024-07-26 23:33:23.607379] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:31.980 [2024-07-26 23:33:23.607389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:31.980 [2024-07-26 23:33:23.607399] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:31.980 [2024-07-26 23:33:23.607408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:31.980 [2024-07-26 23:33:23.607418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:31.980 [2024-07-26 23:33:23.607427] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:31.980 [2024-07-26 23:33:23.607436] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:31.980 [2024-07-26 23:33:23.607445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.980 [2024-07-26 23:33:23.607455] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:31.980 [2024-07-26 23:33:23.607465] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:31.980 [2024-07-26 23:33:23.607475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.980 [2024-07-26 23:33:23.607484] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:31.980 [2024-07-26 23:33:23.607495] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:31.980 [2024-07-26 23:33:23.607505] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:31.980 [2024-07-26 23:33:23.607515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.980 [2024-07-26 23:33:23.607535] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:31.980 [2024-07-26 23:33:23.607545] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:31.980 [2024-07-26 23:33:23.607555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:31.980 [2024-07-26 23:33:23.607565] ftl_layout.c: 115:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:25:31.980 [2024-07-26 23:33:23.607575] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:31.980 [2024-07-26 23:33:23.607585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:31.980 [2024-07-26 23:33:23.607596] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:31.980 [2024-07-26 23:33:23.607608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.980 [2024-07-26 23:33:23.607621] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:31.980 [2024-07-26 23:33:23.607633] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:31.980 [2024-07-26 23:33:23.607644] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:31.980 [2024-07-26 23:33:23.607656] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:31.980 [2024-07-26 23:33:23.607667] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:31.980 [2024-07-26 23:33:23.607679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:31.980 [2024-07-26 23:33:23.607690] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:31.980 [2024-07-26 23:33:23.607702] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:31.980 [2024-07-26 23:33:23.607712] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:31.981 [2024-07-26 23:33:23.607723] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:31.981 [2024-07-26 23:33:23.607734] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:31.981 [2024-07-26 23:33:23.607745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:31.981 [2024-07-26 23:33:23.607758] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:31.981 [2024-07-26 23:33:23.607768] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:31.981 [2024-07-26 23:33:23.607780] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.981 [2024-07-26 23:33:23.607792] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:31.981 [2024-07-26 23:33:23.607803] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:31.981 
[2024-07-26 23:33:23.607815] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:31.981 [2024-07-26 23:33:23.607826] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:31.981 [2024-07-26 23:33:23.607837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.981 [2024-07-26 23:33:23.607848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:31.981 [2024-07-26 23:33:23.607867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.816 ms 00:25:31.981 [2024-07-26 23:33:23.607879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.981 [2024-07-26 23:33:23.632927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.981 [2024-07-26 23:33:23.632989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:31.981 [2024-07-26 23:33:23.633010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.037 ms 00:25:31.981 [2024-07-26 23:33:23.633023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.981 [2024-07-26 23:33:23.633105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.981 [2024-07-26 23:33:23.633118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:31.981 [2024-07-26 23:33:23.633130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:31.981 [2024-07-26 23:33:23.633141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.981 [2024-07-26 23:33:23.712231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.981 [2024-07-26 23:33:23.712267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:31.981 [2024-07-26 23:33:23.712283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.161 ms 00:25:31.981 [2024-07-26 23:33:23.712294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.981 [2024-07-26 23:33:23.712337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.981 [2024-07-26 23:33:23.712350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:31.981 [2024-07-26 23:33:23.712362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:31.981 [2024-07-26 23:33:23.712373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.981 [2024-07-26 23:33:23.712849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.981 [2024-07-26 23:33:23.712876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.981 [2024-07-26 23:33:23.712888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:25:31.981 [2024-07-26 23:33:23.712899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.981 [2024-07-26 23:33:23.713029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.981 [2024-07-26 23:33:23.713044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:31.981 [2024-07-26 23:33:23.713055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:25:31.981 [2024-07-26 23:33:23.713066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.734363] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.734398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:32.240 [2024-07-26 23:33:23.734412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.307 ms 00:25:32.240 [2024-07-26 23:33:23.734424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.753197] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:32.240 [2024-07-26 23:33:23.753234] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:32.240 [2024-07-26 23:33:23.753255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.753267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:32.240 [2024-07-26 23:33:23.753281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.749 ms 00:25:32.240 [2024-07-26 23:33:23.753291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.783635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.783676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:32.240 [2024-07-26 23:33:23.783691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.346 ms 00:25:32.240 [2024-07-26 23:33:23.783702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.801617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.801666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:32.240 [2024-07-26 23:33:23.801681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.902 ms 00:25:32.240 [2024-07-26 23:33:23.801693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.819079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.819117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:32.240 [2024-07-26 23:33:23.819144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.369 ms 00:25:32.240 [2024-07-26 23:33:23.819154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.819595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.819619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:32.240 [2024-07-26 23:33:23.819633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:25:32.240 [2024-07-26 23:33:23.819644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.907839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.907881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:32.240 [2024-07-26 23:33:23.907906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.317 ms 00:25:32.240 [2024-07-26 23:33:23.907918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.919240] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p 
maximum resident size is: 9 (of 10) MiB 00:25:32.240 [2024-07-26 23:33:23.921655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.921691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:32.240 [2024-07-26 23:33:23.921704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.713 ms 00:25:32.240 [2024-07-26 23:33:23.921715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.921784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.921798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:32.240 [2024-07-26 23:33:23.921811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:32.240 [2024-07-26 23:33:23.921822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.921897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.921911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:32.240 [2024-07-26 23:33:23.921928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:32.240 [2024-07-26 23:33:23.921939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.924211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.924244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:32.240 [2024-07-26 23:33:23.924257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.255 ms 00:25:32.240 [2024-07-26 23:33:23.924269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.924308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.924322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:32.240 [2024-07-26 23:33:23.924334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:32.240 [2024-07-26 23:33:23.924345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.924389] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:32.240 [2024-07-26 23:33:23.924404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.924415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:32.240 [2024-07-26 23:33:23.924426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:32.240 [2024-07-26 23:33:23.924437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.960447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.960486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:32.240 [2024-07-26 23:33:23.960508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.046 ms 00:25:32.240 [2024-07-26 23:33:23.960519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.240 [2024-07-26 23:33:23.960589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.240 [2024-07-26 23:33:23.960602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
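Startup completes just below, after which the next 1 GiB copy begins. The layout numbers reported earlier in this trace are easy to sanity-check: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB "Region l2p" in the dump, and at an assumed 4 KiB FTL block size those entries address 81920 MiB (80 GiB) of user-visible space; the remainder of the 102400 MiB data region is, presumably, the FTL's spare capacity. A worked check:

  # Worked check of the layout dump's L2P numbers (values copied from above).
  entries=20971520   # "L2P entries"
  addr=4             # "L2P address size" in bytes
  blk=4096           # FTL block size, assumed to be 4 KiB
  echo "L2P table:     $(( entries * addr / 1024 / 1024 )) MiB"   # 80 MiB
  echo "Logical space: $(( entries * blk  / 1024 / 1024 )) MiB"   # 81920 MiB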
00:25:32.240 [2024-07-26 23:33:23.960614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:25:32.240 [2024-07-26 23:33:23.960626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.240 [2024-07-26 23:33:23.961711] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.719 ms, result 0
00:26:19.132  Copying: 21/1024 [MB] (21 MBps) [... intermediate progress updates omitted; rate steady at 21-22 MBps ...] Copying: 1024/1024 [MB] (average 21 MBps)
[2024-07-26 23:34:10.620034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.132 [2024-07-26 23:34:10.620091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:19.132 [2024-07-26 23:34:10.620108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:26:19.132 [2024-07-26 23:34:10.620120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.132 [2024-07-26 23:34:10.621848] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:19.132 [2024-07-26 23:34:10.628754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.132 [2024-07-26 23:34:10.628794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:26:19.132 [2024-07-26 23:34:10.628810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.853 ms
00:26:19.132 [2024-07-26 23:34:10.628822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.132 [2024-07-26 23:34:10.638320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.132 [2024-07-26 23:34:10.638360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:26:19.132 [2024-07-26 23:34:10.638374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.852 ms
00:26:19.132 [2024-07-26 23:34:10.638386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.132 [2024-07-26 23:34:10.660583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.132 [2024-07-26 23:34:10.660625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:26:19.132 [2024-07-26 23:34:10.660642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.212 ms
00:26:19.132 [2024-07-26 23:34:10.660654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.132 [2024-07-26 23:34:10.665416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.132 [2024-07-26 23:34:10.665460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:26:19.132 [2024-07-26 23:34:10.665473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.733 ms
00:26:19.132 [2024-07-26 23:34:10.665484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.132 [2024-07-26 23:34:10.701543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.132 [2024-07-26 23:34:10.701593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:26:19.132 [2024-07-26 23:34:10.701608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.053 ms
00:26:19.132 [2024-07-26 23:34:10.701620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.132 [2024-07-26 23:34:10.722916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.132 [2024-07-26 23:34:10.722955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:26:19.132 [2024-07-26 23:34:10.722977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.281 ms
00:26:19.132 [2024-07-26 23:34:10.722989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.132 [2024-07-26 23:34:10.852347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.132 [2024-07-26 23:34:10.852389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:26:19.132 [2024-07-26 23:34:10.852404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 129.525 ms
00:26:19.132 [2024-07-26 23:34:10.852423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.393 [2024-07-26 23:34:10.889011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.393 [2024-07-26 23:34:10.889048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:26:19.393 [2024-07-26 23:34:10.889063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.628 ms
00:26:19.393 [2024-07-26 23:34:10.889074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.393 [2024-07-26 23:34:10.924946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.393 [2024-07-26 23:34:10.924989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:26:19.393 [2024-07-26 23:34:10.925003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.889 ms
00:26:19.393 [2024-07-26 23:34:10.925028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.393 [2024-07-26 23:34:10.959898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.393 [2024-07-26 23:34:10.959935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:26:19.393 [2024-07-26 23:34:10.959949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.885 ms
00:26:19.393 [2024-07-26 23:34:10.959959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
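Every management step in this stream is emitted as a fixed quadruple from mngt/ftl_mngt.c (406: Action, 407: name, 409: duration, 410: status), which makes the shutdown easy to profile; above, Persist P2L metadata dominates at 129.525 ms. A throwaway summarizer, assuming a capture with one message per line as SPDK prints them (the console stream here is re-wrapped); "ftl.log" is a hypothetical file name:

  # Pair each trace_step "name:" with the following "duration:" and rank them.
  awk '/407:trace_step:.*name:/     { sub(/.*name: /, ""); name = $0 }
       /409:trace_step:.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                      printf "%10.3f ms  %s\n", $0, name }' ftl.log |
      sort -rn | head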
00:26:19.393 [2024-07-26 23:34:10.994794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:19.393 [2024-07-26 23:34:10.994831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:26:19.393 [2024-07-26 23:34:10.994845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.793 ms
00:26:19.393 [2024-07-26 23:34:10.994856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:19.393 [2024-07-26 23:34:10.994895] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:19.393 [2024-07-26 23:34:10.994911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104960 / 261120 wr_cnt: 1 state: open
[... Bands 2 through 70, identical: 0 / 261120 wr_cnt: 0 state: free ...]
00:26:19.394 [2024-07-26 23:34:10.995756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.995997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.996008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.996020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.996031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.996043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.996055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.996066] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.996078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.996089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.996100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.996112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:19.394 [2024-07-26 23:34:10.996130] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:19.394 [2024-07-26 23:34:10.996148] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa1159e5-ef10-4e53-96ba-e9c2ed419c79 00:26:19.394 [2024-07-26 23:34:10.996160] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104960 00:26:19.394 [2024-07-26 23:34:10.996175] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105920 00:26:19.394 [2024-07-26 23:34:10.996185] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104960 00:26:19.394 [2024-07-26 23:34:10.996197] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:26:19.394 [2024-07-26 23:34:10.996207] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:19.394 [2024-07-26 23:34:10.996219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:19.394 [2024-07-26 23:34:10.996230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:19.394 [2024-07-26 23:34:10.996240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:19.395 [2024-07-26 23:34:10.996262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:19.395 [2024-07-26 23:34:10.996273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.395 [2024-07-26 23:34:10.996284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:19.395 [2024-07-26 23:34:10.996295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:26:19.395 [2024-07-26 23:34:10.996306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.395 [2024-07-26 23:34:11.013475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.395 [2024-07-26 23:34:11.013508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:19.395 [2024-07-26 23:34:11.013522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.148 ms 00:26:19.395 [2024-07-26 23:34:11.013532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.395 [2024-07-26 23:34:11.013745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.395 [2024-07-26 23:34:11.013759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:19.395 [2024-07-26 23:34:11.013770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:26:19.395 [2024-07-26 23:34:11.013781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.395 [2024-07-26 23:34:11.062840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.395 [2024-07-26 23:34:11.062876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:19.395 [2024-07-26 23:34:11.062890] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.395 [2024-07-26 23:34:11.062902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.395 [2024-07-26 23:34:11.062954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.395 [2024-07-26 23:34:11.062980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:19.395 [2024-07-26 23:34:11.062993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.395 [2024-07-26 23:34:11.063005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.395 [2024-07-26 23:34:11.063086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.395 [2024-07-26 23:34:11.063100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:19.395 [2024-07-26 23:34:11.063111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.395 [2024-07-26 23:34:11.063122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.395 [2024-07-26 23:34:11.063141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.395 [2024-07-26 23:34:11.063152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:19.395 [2024-07-26 23:34:11.063162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.395 [2024-07-26 23:34:11.063173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.654 [2024-07-26 23:34:11.167166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.654 [2024-07-26 23:34:11.167216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:19.654 [2024-07-26 23:34:11.167232] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.654 [2024-07-26 23:34:11.167244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.654 [2024-07-26 23:34:11.207381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.654 [2024-07-26 23:34:11.207418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:19.654 [2024-07-26 23:34:11.207433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.654 [2024-07-26 23:34:11.207445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.654 [2024-07-26 23:34:11.207525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.654 [2024-07-26 23:34:11.207538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:19.654 [2024-07-26 23:34:11.207550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.654 [2024-07-26 23:34:11.207561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.654 [2024-07-26 23:34:11.207607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.654 [2024-07-26 23:34:11.207619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:19.654 [2024-07-26 23:34:11.207632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.654 [2024-07-26 23:34:11.207642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.654 [2024-07-26 23:34:11.207743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.654 [2024-07-26 23:34:11.207761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:26:19.654 [2024-07-26 23:34:11.207774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.654 [2024-07-26 23:34:11.207784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.654 [2024-07-26 23:34:11.207829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.654 [2024-07-26 23:34:11.207842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:19.654 [2024-07-26 23:34:11.207853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.654 [2024-07-26 23:34:11.207864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.654 [2024-07-26 23:34:11.207915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.654 [2024-07-26 23:34:11.207933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:19.655 [2024-07-26 23:34:11.207944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.655 [2024-07-26 23:34:11.207955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.655 [2024-07-26 23:34:11.208023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:19.655 [2024-07-26 23:34:11.208036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:19.655 [2024-07-26 23:34:11.208047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:19.655 [2024-07-26 23:34:11.208058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.655 [2024-07-26 23:34:11.208217] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 590.738 ms, result 0 00:26:21.555 00:26:21.556 00:26:21.556 23:34:13 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:23.457 23:34:14 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:23.457 [2024-07-26 23:34:14.869106] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
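A quick sanity check on the write-amplification figure from the shutdown statistics above, taking WAF as total writes divided by user writes:

    WAF = total writes / user writes = 105920 / 104960 ≈ 1.0091

which matches the logged value; the 960 extra blocks are FTL-internal writes. The spdk_dd invocation just launched reads the test data back out of the ftl0 bdev so it can be checksummed. A minimal sketch of that read-back step (paths abbreviated, flags taken from the invocation above, and the digest comparison itself an assumption about what dirty_shutdown.sh verifies):

    # read 262144 blocks back from the FTL bdev into a regular file
    ./build/bin/spdk_dd --ib=ftl0 --of=testfile --count=262144 --json=ftl.json
    # compare digests; matching md5sums would mean no data was lost across the dirty shutdown (assumed check)
    md5sum testfile testfile2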
00:26:23.457 [2024-07-26 23:34:14.869225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78224 ] 00:26:23.457 [2024-07-26 23:34:15.043713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.715 [2024-07-26 23:34:15.258410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.974 [2024-07-26 23:34:15.639098] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:23.974 [2024-07-26 23:34:15.639168] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:24.234 [2024-07-26 23:34:15.793851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.234 [2024-07-26 23:34:15.793905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:24.234 [2024-07-26 23:34:15.793923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:24.234 [2024-07-26 23:34:15.793934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.235 [2024-07-26 23:34:15.794000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.235 [2024-07-26 23:34:15.794013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:24.235 [2024-07-26 23:34:15.794025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:24.235 [2024-07-26 23:34:15.794036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.235 [2024-07-26 23:34:15.794060] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:24.235 [2024-07-26 23:34:15.795170] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:24.235 [2024-07-26 23:34:15.795208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.235 [2024-07-26 23:34:15.795221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:24.235 [2024-07-26 23:34:15.795234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:26:24.235 [2024-07-26 23:34:15.795245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.235 [2024-07-26 23:34:15.796743] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:24.235 [2024-07-26 23:34:15.815396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.235 [2024-07-26 23:34:15.815440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:24.235 [2024-07-26 23:34:15.815462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.684 ms 00:26:24.235 [2024-07-26 23:34:15.815473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.235 [2024-07-26 23:34:15.815535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.235 [2024-07-26 23:34:15.815548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:24.235 [2024-07-26 23:34:15.815561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:24.235 [2024-07-26 23:34:15.815573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.235 [2024-07-26 23:34:15.822543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.235 [2024-07-26 
23:34:15.822573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:24.235 [2024-07-26 23:34:15.822586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.908 ms 00:26:24.235 [2024-07-26 23:34:15.822597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.235 [2024-07-26 23:34:15.822685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.235 [2024-07-26 23:34:15.822700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:24.235 [2024-07-26 23:34:15.822712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:24.235 [2024-07-26 23:34:15.822724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.235 [2024-07-26 23:34:15.822765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.235 [2024-07-26 23:34:15.822782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:24.235 [2024-07-26 23:34:15.822793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:24.235 [2024-07-26 23:34:15.822803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.235 [2024-07-26 23:34:15.822833] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:24.235 [2024-07-26 23:34:15.828459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.235 [2024-07-26 23:34:15.828506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:24.235 [2024-07-26 23:34:15.828520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.643 ms 00:26:24.235 [2024-07-26 23:34:15.828532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.235 [2024-07-26 23:34:15.828569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.235 [2024-07-26 23:34:15.828580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:24.235 [2024-07-26 23:34:15.828592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:24.235 [2024-07-26 23:34:15.828603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.235 [2024-07-26 23:34:15.828656] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:24.235 [2024-07-26 23:34:15.828686] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:26:24.235 [2024-07-26 23:34:15.828720] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:24.235 [2024-07-26 23:34:15.828737] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:26:24.235 [2024-07-26 23:34:15.828819] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:26:24.235 [2024-07-26 23:34:15.828835] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:24.235 [2024-07-26 23:34:15.828850] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:26:24.235 [2024-07-26 23:34:15.828864] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:24.235 [2024-07-26 23:34:15.828877] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:24.235 [2024-07-26 23:34:15.828893] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:24.235 [2024-07-26 23:34:15.828905] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:24.235 [2024-07-26 23:34:15.828917] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:26:24.235 [2024-07-26 23:34:15.828929] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:26:24.235 [2024-07-26 23:34:15.828941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.235 [2024-07-26 23:34:15.828953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:24.235 [2024-07-26 23:34:15.828965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:26:24.235 [2024-07-26 23:34:15.828977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.235 [2024-07-26 23:34:15.829051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.235 [2024-07-26 23:34:15.829069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:24.235 [2024-07-26 23:34:15.829085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:26:24.235 [2024-07-26 23:34:15.829097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.235 [2024-07-26 23:34:15.829165] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:24.235 [2024-07-26 23:34:15.829185] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:24.235 [2024-07-26 23:34:15.829197] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:24.235 [2024-07-26 23:34:15.829210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.235 [2024-07-26 23:34:15.829222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:24.235 [2024-07-26 23:34:15.829233] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:24.235 [2024-07-26 23:34:15.829244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:24.235 [2024-07-26 23:34:15.829255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:24.235 [2024-07-26 23:34:15.829266] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:24.235 [2024-07-26 23:34:15.829276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:24.235 [2024-07-26 23:34:15.829287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:24.235 [2024-07-26 23:34:15.829298] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:24.235 [2024-07-26 23:34:15.829308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:24.235 [2024-07-26 23:34:15.829320] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:24.235 [2024-07-26 23:34:15.829331] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:26:24.235 [2024-07-26 23:34:15.829341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.235 [2024-07-26 23:34:15.829352] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:24.235 [2024-07-26 23:34:15.829362] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:26:24.235 [2024-07-26 23:34:15.829372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:26:24.235 [2024-07-26 23:34:15.829382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:26:24.235 [2024-07-26 23:34:15.829392] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:26:24.235 [2024-07-26 23:34:15.829415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:26:24.235 [2024-07-26 23:34:15.829426] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:24.235 [2024-07-26 23:34:15.829436] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:24.235 [2024-07-26 23:34:15.829447] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:26:24.235 [2024-07-26 23:34:15.829458] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:24.235 [2024-07-26 23:34:15.829469] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:26:24.235 [2024-07-26 23:34:15.829479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:26:24.235 [2024-07-26 23:34:15.829489] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:24.235 [2024-07-26 23:34:15.829500] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:24.235 [2024-07-26 23:34:15.829510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:26:24.235 [2024-07-26 23:34:15.829520] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:24.235 [2024-07-26 23:34:15.829532] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:26:24.235 [2024-07-26 23:34:15.829543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:26:24.235 [2024-07-26 23:34:15.829553] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:24.235 [2024-07-26 23:34:15.829563] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:24.235 [2024-07-26 23:34:15.829574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:24.235 [2024-07-26 23:34:15.829585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:24.235 [2024-07-26 23:34:15.829595] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:26:24.235 [2024-07-26 23:34:15.829605] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:24.235 [2024-07-26 23:34:15.829615] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:24.235 [2024-07-26 23:34:15.829626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:24.236 [2024-07-26 23:34:15.829639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:24.236 [2024-07-26 23:34:15.829656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:24.236 [2024-07-26 23:34:15.829668] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:24.236 [2024-07-26 23:34:15.829678] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:24.236 [2024-07-26 23:34:15.829689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:24.236 [2024-07-26 23:34:15.829699] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:24.236 [2024-07-26 23:34:15.829709] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:24.236 [2024-07-26 23:34:15.829720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:24.236 [2024-07-26 23:34:15.829731] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:24.236 [2024-07-26 23:34:15.829745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.236 [2024-07-26 23:34:15.829757] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:24.236 [2024-07-26 23:34:15.829769] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:26:24.236 [2024-07-26 23:34:15.829780] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:26:24.236 [2024-07-26 23:34:15.829791] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:26:24.236 [2024-07-26 23:34:15.829803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:26:24.236 [2024-07-26 23:34:15.829815] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:26:24.236 [2024-07-26 23:34:15.829827] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:26:24.236 [2024-07-26 23:34:15.829838] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:26:24.236 [2024-07-26 23:34:15.829850] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:26:24.236 [2024-07-26 23:34:15.829861] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:26:24.236 [2024-07-26 23:34:15.829873] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:26:24.236 [2024-07-26 23:34:15.829885] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:26:24.236 [2024-07-26 23:34:15.829898] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:26:24.236 [2024-07-26 23:34:15.829909] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:24.236 [2024-07-26 23:34:15.829922] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:24.236 [2024-07-26 23:34:15.829935] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:24.236 [2024-07-26 23:34:15.829946] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:24.236 [2024-07-26 23:34:15.829957] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:24.236 [2024-07-26 23:34:15.829986] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:26:24.236 [2024-07-26 23:34:15.829999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.236 [2024-07-26 23:34:15.830012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:24.236 [2024-07-26 23:34:15.830024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:26:24.236 [2024-07-26 23:34:15.830037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.236 [2024-07-26 23:34:15.853501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.236 [2024-07-26 23:34:15.853538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:24.236 [2024-07-26 23:34:15.853553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.458 ms 00:26:24.236 [2024-07-26 23:34:15.853564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.236 [2024-07-26 23:34:15.853639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.236 [2024-07-26 23:34:15.853656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:24.236 [2024-07-26 23:34:15.853668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:26:24.236 [2024-07-26 23:34:15.853679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.236 [2024-07-26 23:34:15.915750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.236 [2024-07-26 23:34:15.915787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:24.236 [2024-07-26 23:34:15.915821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.119 ms 00:26:24.236 [2024-07-26 23:34:15.915839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.236 [2024-07-26 23:34:15.915891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.236 [2024-07-26 23:34:15.915925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:24.236 [2024-07-26 23:34:15.915938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:24.236 [2024-07-26 23:34:15.915951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.236 [2024-07-26 23:34:15.916451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.236 [2024-07-26 23:34:15.916477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:24.236 [2024-07-26 23:34:15.916490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:26:24.236 [2024-07-26 23:34:15.916502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.236 [2024-07-26 23:34:15.916624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.236 [2024-07-26 23:34:15.916641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:24.236 [2024-07-26 23:34:15.916653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:24.236 [2024-07-26 23:34:15.916665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.236 [2024-07-26 23:34:15.939457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.236 [2024-07-26 23:34:15.939494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:24.236 [2024-07-26 23:34:15.939509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.805 ms 00:26:24.236 [2024-07-26 
23:34:15.939520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.236 [2024-07-26 23:34:15.958381] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:24.236 [2024-07-26 23:34:15.958422] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:24.236 [2024-07-26 23:34:15.958438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.236 [2024-07-26 23:34:15.958451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:24.236 [2024-07-26 23:34:15.958463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.844 ms 00:26:24.236 [2024-07-26 23:34:15.958474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.236 [2024-07-26 23:34:15.987603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.236 [2024-07-26 23:34:15.987646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:24.236 [2024-07-26 23:34:15.987662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.131 ms 00:26:24.236 [2024-07-26 23:34:15.987673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.495 [2024-07-26 23:34:16.006000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.495 [2024-07-26 23:34:16.006041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:24.495 [2024-07-26 23:34:16.006055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.312 ms 00:26:24.495 [2024-07-26 23:34:16.006066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.495 [2024-07-26 23:34:16.022813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.495 [2024-07-26 23:34:16.022854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:24.495 [2024-07-26 23:34:16.022868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.733 ms 00:26:24.495 [2024-07-26 23:34:16.022879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.495 [2024-07-26 23:34:16.023370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.495 [2024-07-26 23:34:16.023400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:24.495 [2024-07-26 23:34:16.023414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:26:24.495 [2024-07-26 23:34:16.023426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.495 [2024-07-26 23:34:16.110696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.495 [2024-07-26 23:34:16.110743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:24.495 [2024-07-26 23:34:16.110760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.389 ms 00:26:24.495 [2024-07-26 23:34:16.110772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.495 [2024-07-26 23:34:16.122121] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:24.495 [2024-07-26 23:34:16.124547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.495 [2024-07-26 23:34:16.124581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:24.495 [2024-07-26 23:34:16.124595] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.753 ms 00:26:24.495 [2024-07-26 23:34:16.124606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.495 [2024-07-26 23:34:16.124676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.495 [2024-07-26 23:34:16.124692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:24.495 [2024-07-26 23:34:16.124705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:24.495 [2024-07-26 23:34:16.124716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.495 [2024-07-26 23:34:16.125946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.495 [2024-07-26 23:34:16.126003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:24.495 [2024-07-26 23:34:16.126017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.169 ms 00:26:24.495 [2024-07-26 23:34:16.126029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.495 [2024-07-26 23:34:16.128300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.495 [2024-07-26 23:34:16.128333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:26:24.495 [2024-07-26 23:34:16.128350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.232 ms 00:26:24.495 [2024-07-26 23:34:16.128361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.495 [2024-07-26 23:34:16.128390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.495 [2024-07-26 23:34:16.128403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:24.496 [2024-07-26 23:34:16.128414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:24.496 [2024-07-26 23:34:16.128432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.496 [2024-07-26 23:34:16.128472] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:24.496 [2024-07-26 23:34:16.128501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.496 [2024-07-26 23:34:16.128513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:24.496 [2024-07-26 23:34:16.128525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:24.496 [2024-07-26 23:34:16.128540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.496 [2024-07-26 23:34:16.163879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.496 [2024-07-26 23:34:16.163921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:24.496 [2024-07-26 23:34:16.163936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.367 ms 00:26:24.496 [2024-07-26 23:34:16.163948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.496 [2024-07-26 23:34:16.164025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.496 [2024-07-26 23:34:16.164046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:24.496 [2024-07-26 23:34:16.164058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:24.496 [2024-07-26 23:34:16.164069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.496 [2024-07-26 23:34:16.168732] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 374.481 ms, result 0 00:26:59.526  Copying: 1316/1048576 [kB] (1316 kBps) (intermediate progress updates elided) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-26 23:34:51.130527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.526 [2024-07-26 23:34:51.130634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:59.526 [2024-07-26 23:34:51.130699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:59.526 [2024-07-26 23:34:51.130726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.526 [2024-07-26 23:34:51.130780] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:59.526 [2024-07-26 23:34:51.139315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.526 [2024-07-26 23:34:51.139377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:59.526 [2024-07-26 23:34:51.139399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.508 ms 00:26:59.526 [2024-07-26 23:34:51.139417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.526 [2024-07-26 23:34:51.139783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.526 [2024-07-26 23:34:51.139807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:59.526 [2024-07-26 23:34:51.139833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:26:59.526 [2024-07-26 23:34:51.139850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.526 [2024-07-26 23:34:51.154097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.526 [2024-07-26 23:34:51.154145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:59.527 [2024-07-26 23:34:51.154162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.227 ms 00:26:59.527 [2024-07-26 23:34:51.154175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.527 [2024-07-26 23:34:51.159447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.527 [2024-07-26 23:34:51.159487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:59.527 [2024-07-26 23:34:51.159499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.239 ms 00:26:59.527
[2024-07-26 23:34:51.159517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.527 [2024-07-26 23:34:51.195103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.527 [2024-07-26 23:34:51.195142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:59.527 [2024-07-26 23:34:51.195155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.595 ms 00:26:59.527 [2024-07-26 23:34:51.195165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.527 [2024-07-26 23:34:51.216443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.527 [2024-07-26 23:34:51.216481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:59.527 [2024-07-26 23:34:51.216494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.274 ms 00:26:59.527 [2024-07-26 23:34:51.216504] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.527 [2024-07-26 23:34:51.221080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.527 [2024-07-26 23:34:51.221117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:59.527 [2024-07-26 23:34:51.221130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.543 ms 00:26:59.527 [2024-07-26 23:34:51.221140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.527 [2024-07-26 23:34:51.256470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.527 [2024-07-26 23:34:51.256509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:59.527 [2024-07-26 23:34:51.256522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.364 ms 00:26:59.527 [2024-07-26 23:34:51.256532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.788 [2024-07-26 23:34:51.293571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.788 [2024-07-26 23:34:51.293607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:59.788 [2024-07-26 23:34:51.293619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.061 ms 00:26:59.788 [2024-07-26 23:34:51.293628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.788 [2024-07-26 23:34:51.328263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.788 [2024-07-26 23:34:51.328297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:59.788 [2024-07-26 23:34:51.328310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.654 ms 00:26:59.788 [2024-07-26 23:34:51.328319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.788 [2024-07-26 23:34:51.363241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.788 [2024-07-26 23:34:51.363276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:59.788 [2024-07-26 23:34:51.363289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.897 ms 00:26:59.788 [2024-07-26 23:34:51.363298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.788 [2024-07-26 23:34:51.363333] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:59.788 [2024-07-26 23:34:51.363349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:59.788 
[2024-07-26 23:34:51.363361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open (Bands 3-100 elided: 98 identical entries, each reading "Band N: 0 / 261120 wr_cnt: 0 state: free") 00:26:59.789 [2024-07-26 23:34:51.364393] ftl_debug.c:
211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:59.789 [2024-07-26 23:34:51.364403] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa1159e5-ef10-4e53-96ba-e9c2ed419c79 00:26:59.789 [2024-07-26 23:34:51.364413] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448 00:26:59.789 [2024-07-26 23:34:51.364422] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 161472 00:26:59.789 [2024-07-26 23:34:51.364431] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 159488 00:26:59.789 [2024-07-26 23:34:51.364446] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 00:26:59.789 [2024-07-26 23:34:51.364455] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:59.789 [2024-07-26 23:34:51.364465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:59.789 [2024-07-26 23:34:51.364473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:59.789 [2024-07-26 23:34:51.364482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:59.789 [2024-07-26 23:34:51.364491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:59.789 [2024-07-26 23:34:51.364500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.789 [2024-07-26 23:34:51.364509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:59.789 [2024-07-26 23:34:51.364519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.170 ms 00:26:59.789 [2024-07-26 23:34:51.364528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.789 [2024-07-26 23:34:51.383065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.789 [2024-07-26 23:34:51.383097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:59.789 [2024-07-26 23:34:51.383115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.516 ms 00:26:59.789 [2024-07-26 23:34:51.383124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.789 [2024-07-26 23:34:51.383365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.789 [2024-07-26 23:34:51.383377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:59.789 [2024-07-26 23:34:51.383387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:26:59.789 [2024-07-26 23:34:51.383398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.789 [2024-07-26 23:34:51.434794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:59.789 [2024-07-26 23:34:51.434828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:59.789 [2024-07-26 23:34:51.434840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:59.789 [2024-07-26 23:34:51.434851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.789 [2024-07-26 23:34:51.434899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:59.789 [2024-07-26 23:34:51.434909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:59.789 [2024-07-26 23:34:51.434919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:59.789 [2024-07-26 23:34:51.434929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.789 [2024-07-26 23:34:51.435013] mngt/ftl_mngt.c: 
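
The statistics dump above reports total writes 161472 against user writes 159488 and a WAF (write amplification factor) of 1.0124. Assuming WAF here is simply total media writes divided by user writes, which is consistent with the dumped figures, the number checks out; the same formula also explains the "WAF: inf" in the final dump at the end of this run, where user writes is 0:

    # Sanity-check the WAF reported by ftl_dev_dump_stats,
    # assuming WAF = total writes / user writes.
    total_writes = 161472   # "total writes: 161472" from the dump above
    user_writes = 159488    # "user writes: 159488" from the dump above
    waf = total_writes / user_writes if user_writes else float("inf")
    print(f"WAF: {waf:.4f}")  # -> WAF: 1.0124
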
00:26:59.789 [2024-07-26 23:34:51.435013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:59.789 [2024-07-26 23:34:51.435033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:26:59.789 [2024-07-26 23:34:51.435049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:59.789 [2024-07-26 23:34:51.435058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:59.789 [2024-07-26 23:34:51.435075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:59.789 [2024-07-26 23:34:51.435085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:26:59.789 [2024-07-26 23:34:51.435094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:59.789 [2024-07-26 23:34:51.435103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:00.049 [2024-07-26 23:34:51.543665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:00.049 [2024-07-26 23:34:51.543711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:27:00.049 [2024-07-26 23:34:51.543726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:00.049 [2024-07-26 23:34:51.543736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:00.049 [2024-07-26 23:34:51.585320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:00.049 [2024-07-26 23:34:51.585354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:27:00.049 [2024-07-26 23:34:51.585366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:00.049 [2024-07-26 23:34:51.585377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:00.049 [2024-07-26 23:34:51.585440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:00.049 [2024-07-26 23:34:51.585451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:27:00.049 [2024-07-26 23:34:51.585466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:00.049 [2024-07-26 23:34:51.585477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:00.049 [2024-07-26 23:34:51.585517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:00.049 [2024-07-26 23:34:51.585529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:00.049 [2024-07-26 23:34:51.585539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:00.049 [2024-07-26 23:34:51.585548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:00.049 [2024-07-26 23:34:51.585652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:00.049 [2024-07-26 23:34:51.585665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:00.049 [2024-07-26 23:34:51.585677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:00.049 [2024-07-26 23:34:51.585690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:00.049 [2024-07-26 23:34:51.585722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:00.049 [2024-07-26 23:34:51.585734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:27:00.049 [2024-07-26 23:34:51.585743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:00.049 [2024-07-26 23:34:51.585753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
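
Every management step in this shutdown is logged as the same four-record pattern: an Action (or Rollback) marker, a name, a duration and a status. When auditing a long sequence like the rollback above, it is handier to collapse each quadruplet into a single row. A minimal sketch, written against the exact record format shown in this log (the [FTL][ftl0] tag and wording are hard-coded assumptions):

    import re

    # One trace_step group spans four records: Action|Rollback, name, duration, status.
    FIELD = re.compile(
        r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] "
        r"(Action|Rollback|name: (?P<name>.+)|duration: (?P<dur>[\d.]+) ms|status: (?P<st>\d+))"
    )

    def collapse(lines):
        steps, cur = [], {}
        for line in lines:
            m = FIELD.search(line)
            if not m:
                continue
            if m.group(1) in ("Action", "Rollback"):
                cur = {"kind": m.group(1)}          # a new group starts
            elif m.group("name"):
                cur["name"] = m.group("name").strip()
            elif m.group("dur"):
                cur["duration_ms"] = float(m.group("dur"))
            elif m.group("st") is not None:
                cur["status"] = int(m.group("st"))  # status closes the group
                steps.append(cur)
        return steps

Feeding the records above through collapse() yields one dict per step, e.g. {'kind': 'Rollback', 'name': 'Initialize superblock', 'duration_ms': 0.0, 'status': 0}.
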
00:27:00.049 [2024-07-26 23:34:51.585789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:00.049 [2024-07-26 23:34:51.585801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:00.049 [2024-07-26 23:34:51.585810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:00.049 [2024-07-26 23:34:51.585823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:00.049 [2024-07-26 23:34:51.585866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:00.049 [2024-07-26 23:34:51.585877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:00.049 [2024-07-26 23:34:51.585887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:00.049 [2024-07-26 23:34:51.585896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:00.049 [2024-07-26 23:34:51.586040] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 456.217 ms, result 0
00:27:01.429
00:27:01.429
00:27:01.429 23:34:52 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:02.807 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:27:02.807 23:34:54 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:03.066 [2024-07-26 23:34:54.574243] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:27:03.066 [2024-07-26 23:34:54.574362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78631 ]
00:27:03.066 [2024-07-26 23:34:54.748614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:03.325 [2024-07-26 23:34:54.962843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:03.894 [2024-07-26 23:34:55.350575] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:03.894 [2024-07-26 23:34:55.350644] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:27:03.894 [2024-07-26 23:34:55.504940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.894 [2024-07-26 23:34:55.505001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:27:03.894 [2024-07-26 23:34:55.505016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:27:03.894 [2024-07-26 23:34:55.505026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.894 [2024-07-26 23:34:55.505075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.894 [2024-07-26 23:34:55.505103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:03.894 [2024-07-26 23:34:55.505113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:27:03.894 [2024-07-26 23:34:55.505122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.894 [2024-07-26 23:34:55.505143] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:27:03.894 [2024-07-26 23:34:55.506209] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
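
After the clean 'FTL shutdown', the test verifies what survived: md5sum -c checks the first region of testfile against a stored digest, and spdk_dd reads the second region back into testfile2. A sketch of those two steps driven from Python; paths and flags are taken verbatim from the log, while the reading of --count/--skip as 4 KiB FTL blocks (262144 blocks = the 1024 MB reported by the copy progress later) is inferred, not printed by spdk_dd itself:

    import hashlib, subprocess

    SPDK = "/home/vagrant/spdk_repo/spdk"

    # Read the region back through the FTL bdev, as dirty_shutdown.sh@95 does.
    subprocess.run([
        f"{SPDK}/build/bin/spdk_dd",
        "--ib=ftl0", f"--of={SPDK}/test/ftl/testfile2",
        "--count=262144", "--skip=262144",
        f"--json={SPDK}/test/ftl/config/ftl.json",
    ], check=True)

    # Equivalent of `md5sum -c`: recompute the digest and compare.
    def md5(path):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = open(f"{SPDK}/test/ftl/testfile.md5").read().split()[0]
    print("testfile: OK" if md5(f"{SPDK}/test/ftl/testfile") == expected else "testfile: FAILED")

This is a sketch of what dirty_shutdown.sh does at steps @94 and @95, not the script itself.
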
00:27:03.894 [2024-07-26 23:34:55.506240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.894 [2024-07-26 23:34:55.506251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:03.894 [2024-07-26 23:34:55.506262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms
00:27:03.894 [2024-07-26 23:34:55.506271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.894 [2024-07-26 23:34:55.507685] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:27:03.894 [2024-07-26 23:34:55.526943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.894 [2024-07-26 23:34:55.526991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:27:03.894 [2024-07-26 23:34:55.527010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.291 ms
00:27:03.894 [2024-07-26 23:34:55.527020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.894 [2024-07-26 23:34:55.527078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.894 [2024-07-26 23:34:55.527105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:27:03.894 [2024-07-26 23:34:55.527115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:27:03.894 [2024-07-26 23:34:55.527124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.894 [2024-07-26 23:34:55.533915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.894 [2024-07-26 23:34:55.533942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:03.894 [2024-07-26 23:34:55.533953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.734 ms
00:27:03.894 [2024-07-26 23:34:55.533972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.894 [2024-07-26 23:34:55.534055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.894 [2024-07-26 23:34:55.534068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:03.894 [2024-07-26 23:34:55.534078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms
00:27:03.894 [2024-07-26 23:34:55.534088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.894 [2024-07-26 23:34:55.534124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.894 [2024-07-26 23:34:55.534138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:27:03.894 [2024-07-26 23:34:55.534148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:27:03.894 [2024-07-26 23:34:55.534157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.894 [2024-07-26 23:34:55.534183] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:03.895 [2024-07-26 23:34:55.540189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.895 [2024-07-26 23:34:55.540355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:27:03.895 [2024-07-26 23:34:55.540564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.023 ms
00:27:03.895 [2024-07-26 23:34:55.540603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
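
Among the startup records above, the ftl_mngt_load_sb line "SHM: clean 0, shm_clean 0" is the one this test cares about: clean 0 presumably means the previous shutdown did not leave the superblock marked clean, which is the condition a dirty-shutdown test is engineered to produce before the restore steps below run. A small helper to pull those flags out of a captured log (the exact wording of the record is an assumption):

    import re

    # Matches: "mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: ... SHM: clean 0, shm_clean 0"
    SB_LINE = re.compile(r"ftl_mngt_load_sb: .*SHM: clean (\d+), shm_clean (\d+)")

    def sb_clean_flags(log_text):
        m = SB_LINE.search(log_text)
        return (int(m.group(1)), int(m.group(2))) if m else None

    # For this run the helper returns (0, 0), the dirty case.
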
00:27:03.895 [2024-07-26 23:34:55.540661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.895 [2024-07-26 23:34:55.540693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:27:03.895 [2024-07-26 23:34:55.540724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:27:03.895 [2024-07-26 23:34:55.540754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.895 [2024-07-26 23:34:55.540825] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:27:03.895 [2024-07-26 23:34:55.540876] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
00:27:03.895 [2024-07-26 23:34:55.541085] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:27:03.895 [2024-07-26 23:34:55.541271] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
00:27:03.895 [2024-07-26 23:34:55.541529] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:27:03.895 [2024-07-26 23:34:55.541547] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:27:03.895 [2024-07-26 23:34:55.541561] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:27:03.895 [2024-07-26 23:34:55.541574] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:27:03.895 [2024-07-26 23:34:55.541587] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:27:03.895 [2024-07-26 23:34:55.541604] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:27:03.895 [2024-07-26 23:34:55.541614] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:27:03.895 [2024-07-26 23:34:55.541624] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:27:03.895 [2024-07-26 23:34:55.541634] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:27:03.895 [2024-07-26 23:34:55.541646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.895 [2024-07-26 23:34:55.541657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:27:03.895 [2024-07-26 23:34:55.541667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms
00:27:03.895 [2024-07-26 23:34:55.541678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.895 [2024-07-26 23:34:55.541794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.895 [2024-07-26 23:34:55.541805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:27:03.895 [2024-07-26 23:34:55.541818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:27:03.895 [2024-07-26 23:34:55.541829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.895 [2024-07-26 23:34:55.541891] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:27:03.895 [2024-07-26 23:34:55.541903] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:27:03.895 [2024-07-26 23:34:55.541914] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:27:03.895 [2024-07-26 23:34:55.541925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
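
The layout figures above are internally consistent: 20971520 L2P entries at 4 bytes per address is exactly the 80 MiB occupied by the Region l2p in the dump that continues below, and 20971520 logical blocks of 4 KiB is 80 GiB of user-addressable space carved out of the 103424 MiB base device. A quick check (the 4 KiB FTL block size is inferred; the log does not print it):

    # Cross-check the FTL layout figures printed above.
    l2p_entries = 20_971_520   # "L2P entries: 20971520"
    addr_size   = 4            # "L2P address size: 4"
    ftl_block   = 4096         # assumed FTL block size (4 KiB)

    print(l2p_entries * addr_size / 2**20)   # 80.0 MiB -> "Region l2p ... blocks: 80.00 MiB"
    print(l2p_entries * ftl_block / 2**30)   # 80.0 GiB of user-addressable space
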
00:27:03.895 [2024-07-26 23:34:55.541935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:27:03.895 [2024-07-26 23:34:55.541944] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:27:03.895 [2024-07-26 23:34:55.541953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:27:03.895 [2024-07-26 23:34:55.541976] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:27:03.895 [2024-07-26 23:34:55.541986] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:27:03.895 [2024-07-26 23:34:55.541996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:27:03.895 [2024-07-26 23:34:55.542005] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:27:03.895 [2024-07-26 23:34:55.542015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:27:03.895 [2024-07-26 23:34:55.542023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:27:03.895 [2024-07-26 23:34:55.542033] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:27:03.895 [2024-07-26 23:34:55.542042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB
00:27:03.895 [2024-07-26 23:34:55.542051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:03.895 [2024-07-26 23:34:55.542060] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:27:03.895 [2024-07-26 23:34:55.542070] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB
00:27:03.895 [2024-07-26 23:34:55.542079] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:03.895 [2024-07-26 23:34:55.542089] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc
00:27:03.895 [2024-07-26 23:34:55.542098] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB
00:27:03.895 [2024-07-26 23:34:55.542117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB
00:27:03.895 [2024-07-26 23:34:55.542126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:27:03.895 [2024-07-26 23:34:55.542136] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:27:03.895 [2024-07-26 23:34:55.542144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:27:03.895 [2024-07-26 23:34:55.542154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:27:03.895 [2024-07-26 23:34:55.542163] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB
00:27:03.895 [2024-07-26 23:34:55.542172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:27:03.895 [2024-07-26 23:34:55.542181] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:27:03.895 [2024-07-26 23:34:55.542190] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:27:03.895 [2024-07-26 23:34:55.542199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:27:03.895 [2024-07-26 23:34:55.542208] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:27:03.895 [2024-07-26 23:34:55.542217] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB
00:27:03.895 [2024-07-26 23:34:55.542226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB
00:27:03.895 [2024-07-26 23:34:55.542235] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:27:03.895 [2024-07-26 23:34:55.542244] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:27:03.895 [2024-07-26 23:34:55.542253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:27:03.895 [2024-07-26 23:34:55.542262] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:27:03.895 [2024-07-26 23:34:55.542271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB
00:27:03.895 [2024-07-26 23:34:55.542280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:27:03.895 [2024-07-26 23:34:55.542289] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:27:03.895 [2024-07-26 23:34:55.542299] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:27:03.895 [2024-07-26 23:34:55.542309] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:27:03.895 [2024-07-26 23:34:55.542323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:27:03.895 [2024-07-26 23:34:55.542332] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:27:03.895 [2024-07-26 23:34:55.542342] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:27:03.895 [2024-07-26 23:34:55.542351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:27:03.895 [2024-07-26 23:34:55.542360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:27:03.895 [2024-07-26 23:34:55.542369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:27:03.895 [2024-07-26 23:34:55.542378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:27:03.895 [2024-07-26 23:34:55.542388] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:27:03.895 [2024-07-26 23:34:55.542401] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:03.895 [2024-07-26 23:34:55.542412] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:27:03.895 [2024-07-26 23:34:55.542422] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:27:03.895 [2024-07-26 23:34:55.542433] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:27:03.895 [2024-07-26 23:34:55.542444] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:27:03.895 [2024-07-26 23:34:55.542455] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:27:03.895 [2024-07-26 23:34:55.542465] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:27:03.895 [2024-07-26 23:34:55.542475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:27:03.895 [2024-07-26 23:34:55.542485] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:27:03.895 [2024-07-26 23:34:55.542496] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:27:03.895 [2024-07-26 23:34:55.542506] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:27:03.895 [2024-07-26 23:34:55.542516] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:27:03.895 [2024-07-26 23:34:55.542526] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:27:03.896 [2024-07-26 23:34:55.542536] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:27:03.896 [2024-07-26 23:34:55.542546] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:27:03.896 [2024-07-26 23:34:55.542558] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:03.896 [2024-07-26 23:34:55.542569] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:27:03.896 [2024-07-26 23:34:55.542580] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:27:03.896 [2024-07-26 23:34:55.542590] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:27:03.896 [2024-07-26 23:34:55.542601] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:27:03.896 [2024-07-26 23:34:55.542612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.896 [2024-07-26 23:34:55.542623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:27:03.896 [2024-07-26 23:34:55.542633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms
00:27:03.896 [2024-07-26 23:34:55.542643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.896 [2024-07-26 23:34:55.567828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.896 [2024-07-26 23:34:55.567992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:27:03.896 [2024-07-26 23:34:55.568067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.185 ms
00:27:03.896 [2024-07-26 23:34:55.568102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.896 [2024-07-26 23:34:55.568198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.896 [2024-07-26 23:34:55.568252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:27:03.896 [2024-07-26 23:34:55.568283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms
00:27:03.896 [2024-07-26 23:34:55.568311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.896 [2024-07-26 23:34:55.629157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.896 [2024-07-26 23:34:55.629308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:27:03.896 [2024-07-26 23:34:55.629450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.874 ms
00:27:03.896 [2024-07-26 23:34:55.629495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
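
The SB metadata layout just above and the MiB-denominated region dump before it describe the same regions in two units: blk_offs and blk_sz appear to be hex counts of 4 KiB FTL blocks. For example, region type 0x2 (blk_offs:0x20 blk_sz:0x5000) is the l2p region at offset 0.12 MiB with 80.00 MiB of blocks. A converter sketch (the 4 KiB block size and the type-to-name pairing are inferred, not stated by the log):

    # Convert hex blk_offs/blk_sz (counts of 4 KiB FTL blocks, assumed)
    # into the MiB figures used by the region dump above.
    BLOCK = 4096

    def region_mib(blk_offs, blk_sz):
        return blk_offs * BLOCK / 2**20, blk_sz * BLOCK / 2**20

    # Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 (the l2p region):
    print(region_mib(0x20, 0x5000))  # (0.125, 80.0) -> "offset: 0.12 MiB", "blocks: 80.00 MiB"
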
00:27:03.896 [2024-07-26 23:34:55.629547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.896 [2024-07-26 23:34:55.629579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:27:03.896 [2024-07-26 23:34:55.629610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:27:03.896 [2024-07-26 23:34:55.629639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.896 [2024-07-26 23:34:55.630214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.896 [2024-07-26 23:34:55.630323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:27:03.896 [2024-07-26 23:34:55.630390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms
00:27:03.896 [2024-07-26 23:34:55.630425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:03.896 [2024-07-26 23:34:55.630564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:03.896 [2024-07-26 23:34:55.630639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:27:03.896 [2024-07-26 23:34:55.630691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms
00:27:03.896 [2024-07-26 23:34:55.630720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.156 [2024-07-26 23:34:55.652513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.156 [2024-07-26 23:34:55.652663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:04.156 [2024-07-26 23:34:55.652840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.786 ms
00:27:04.156 [2024-07-26 23:34:55.652878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.156 [2024-07-26 23:34:55.671542] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:27:04.156 [2024-07-26 23:34:55.671693] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:27:04.156 [2024-07-26 23:34:55.671712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.156 [2024-07-26 23:34:55.671723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:27:04.156 [2024-07-26 23:34:55.671735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.719 ms
00:27:04.156 [2024-07-26 23:34:55.671745] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.156 [2024-07-26 23:34:55.699945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.156 [2024-07-26 23:34:55.699994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:27:04.156 [2024-07-26 23:34:55.700007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.175 ms
00:27:04.156 [2024-07-26 23:34:55.700018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.156 [2024-07-26 23:34:55.717265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.156 [2024-07-26 23:34:55.717300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:27:04.156 [2024-07-26 23:34:55.717312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.234 ms
00:27:04.156 [2024-07-26 23:34:55.717321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.156 [2024-07-26 23:34:55.734785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.156 [2024-07-26 23:34:55.734819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:27:04.156 [2024-07-26 23:34:55.734832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.455 ms
00:27:04.156 [2024-07-26 23:34:55.734841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.156 [2024-07-26 23:34:55.735324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.156 [2024-07-26 23:34:55.735342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:27:04.156 [2024-07-26 23:34:55.735370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms
00:27:04.156 [2024-07-26 23:34:55.735380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.156 [2024-07-26 23:34:55.824473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.156 [2024-07-26 23:34:55.824515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:27:04.156 [2024-07-26 23:34:55.824528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.218 ms
00:27:04.156 [2024-07-26 23:34:55.824538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.156 [2024-07-26 23:34:55.835838] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:27:04.156 [2024-07-26 23:34:55.838395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.156 [2024-07-26 23:34:55.838422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:27:04.156 [2024-07-26 23:34:55.838434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.838 ms
00:27:04.156 [2024-07-26 23:34:55.838445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.156 [2024-07-26 23:34:55.838511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.156 [2024-07-26 23:34:55.838525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:27:04.156 [2024-07-26 23:34:55.838536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:27:04.156 [2024-07-26 23:34:55.838547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.156 [2024-07-26 23:34:55.839529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.156 [2024-07-26 23:34:55.839637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:27:04.156 [2024-07-26 23:34:55.839725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms
00:27:04.156 [2024-07-26 23:34:55.839761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.156 [2024-07-26 23:34:55.841811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.156 [2024-07-26 23:34:55.841930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:27:04.157 [2024-07-26 23:34:55.842047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.008 ms
00:27:04.157 [2024-07-26 23:34:55.842084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.157 [2024-07-26 23:34:55.842138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.157 [2024-07-26 23:34:55.842169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:27:04.157 [2024-07-26 23:34:55.842198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:27:04.157 [2024-07-26 23:34:55.842235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.157 [2024-07-26 23:34:55.842292] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:27:04.157 [2024-07-26 23:34:55.842429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.157 [2024-07-26 23:34:55.842441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:27:04.157 [2024-07-26 23:34:55.842452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms
00:27:04.157 [2024-07-26 23:34:55.842465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.157 [2024-07-26 23:34:55.878712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.157 [2024-07-26 23:34:55.878844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:27:04.157 [2024-07-26 23:34:55.878913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.279 ms
00:27:04.157 [2024-07-26 23:34:55.878946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.157 [2024-07-26 23:34:55.879104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:04.157 [2024-07-26 23:34:55.879152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:27:04.157 [2024-07-26 23:34:55.879182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:27:04.157 [2024-07-26 23:34:55.879212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:04.157 [2024-07-26 23:34:55.880353] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 375.560 ms, result 0
00:27:48.957  Copying: 25/1024 [MB] (25 MBps) Copying: 48/1024 [MB] (23 MBps) Copying: 72/1024 [MB] (23 MBps) Copying: 95/1024 [MB] (23 MBps) Copying: 118/1024 [MB] (23 MBps) Copying: 141/1024 [MB] (22 MBps) Copying: 164/1024 [MB] (23 MBps) Copying: 186/1024 [MB] (22 MBps) Copying: 209/1024 [MB] (22 MBps) Copying: 232/1024 [MB] (22 MBps) Copying: 255/1024 [MB] (23 MBps) Copying: 279/1024 [MB] (23 MBps) Copying: 302/1024 [MB] (23 MBps) Copying: 326/1024 [MB] (23 MBps) Copying: 349/1024 [MB] (23 MBps) Copying: 372/1024 [MB] (22 MBps) Copying: 395/1024 [MB] (23 MBps) Copying: 419/1024 [MB] (23 MBps) Copying: 443/1024 [MB] (24 MBps) Copying: 467/1024 [MB] (23 MBps) Copying: 490/1024 [MB] (23 MBps) Copying: 513/1024 [MB] (23 MBps) Copying: 536/1024 [MB] (22 MBps) Copying: 560/1024 [MB] (23 MBps) Copying: 583/1024 [MB] (23 MBps) Copying: 607/1024 [MB] (23 MBps) Copying: 630/1024 [MB] (23 MBps) Copying: 653/1024 [MB] (23 MBps) Copying: 676/1024 [MB] (22 MBps) Copying: 700/1024 [MB] (23 MBps) Copying: 723/1024 [MB] (23 MBps) Copying: 746/1024 [MB] (23 MBps) Copying: 768/1024 [MB] (22 MBps) Copying: 792/1024 [MB] (23 MBps) Copying: 815/1024 [MB] (23 MBps) Copying: 837/1024 [MB] (22 MBps) Copying: 859/1024 [MB] (22 MBps) Copying: 882/1024 [MB] (22 MBps) Copying: 904/1024 [MB] (22 MBps) Copying: 927/1024 [MB] (22 MBps) Copying: 950/1024 [MB] (22 MBps) Copying: 972/1024 [MB] (22 MBps) Copying: 995/1024 [MB] (22 MBps) Copying: 1017/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 23 MBps)
[2024-07-26 23:35:40.478425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:48.957 [2024-07-26 23:35:40.478520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:27:48.957 [2024-07-26 23:35:40.478550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:27:48.957 [2024-07-26 23:35:40.478572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
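
The Copying progress above reports per-interval rates of 22-25 MBps and an average of 23 MBps for the full 1024 MB readback. That average also agrees with the wall clock, since the copy runs between the 'FTL startup' finish and the first shutdown trace record. A quick recomputation (the start and end instants are read off the surrounding records, so they are approximate):

    # Recompute the average rate reported by spdk_dd's progress line.
    mb_total = 1024
    start = 34 * 60 + 55.880   # 23:34:55.880, 'FTL startup' finished
    end   = 35 * 60 + 40.478   # 23:35:40.478, first shutdown trace record
    print(mb_total / (end - start))   # ~22.96 MBps, logged as "average 23 MBps"
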
00:27:48.957 [2024-07-26 23:35:40.478618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:48.957 [2024-07-26 23:35:40.484559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:48.957 [2024-07-26 23:35:40.484606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:27:48.957 [2024-07-26 23:35:40.484624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.916 ms
00:27:48.957 [2024-07-26 23:35:40.484648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:48.957 [2024-07-26 23:35:40.485222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:48.957 [2024-07-26 23:35:40.485245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:27:48.957 [2024-07-26 23:35:40.485262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms
00:27:48.957 [2024-07-26 23:35:40.485276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:48.957 [2024-07-26 23:35:40.489766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:48.957 [2024-07-26 23:35:40.489800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:27:48.957 [2024-07-26 23:35:40.489817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.474 ms
00:27:48.957 [2024-07-26 23:35:40.489832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:48.957 [2024-07-26 23:35:40.495609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:48.957 [2024-07-26 23:35:40.495652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:27:48.957 [2024-07-26 23:35:40.495665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.750 ms
00:27:48.957 [2024-07-26 23:35:40.495676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:48.957 [2024-07-26 23:35:40.532145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:48.957 [2024-07-26 23:35:40.532184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:27:48.957 [2024-07-26 23:35:40.532197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.446 ms
00:27:48.957 [2024-07-26 23:35:40.532206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:48.957 [2024-07-26 23:35:40.553611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:48.957 [2024-07-26 23:35:40.553649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:27:48.957 [2024-07-26 23:35:40.553662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.399 ms
00:27:48.957 [2024-07-26 23:35:40.553672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:48.957 [2024-07-26 23:35:40.558317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:48.958 [2024-07-26 23:35:40.558359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:27:48.958 [2024-07-26 23:35:40.558372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.611 ms
00:27:48.958 [2024-07-26 23:35:40.558382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:48.958 [2024-07-26 23:35:40.593510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:48.958 [2024-07-26 23:35:40.593547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:27:48.958 [2024-07-26 23:35:40.593559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.167 ms
00:27:48.958 [2024-07-26 23:35:40.593569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:48.958 [2024-07-26 23:35:40.629894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:48.958 [2024-07-26 23:35:40.629928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:27:48.958 [2024-07-26 23:35:40.629940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.346 ms
00:27:48.958 [2024-07-26 23:35:40.629949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:48.958 [2024-07-26 23:35:40.664740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:48.958 [2024-07-26 23:35:40.664774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:27:48.958 [2024-07-26 23:35:40.664786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.800 ms
00:27:48.958 [2024-07-26 23:35:40.664795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:48.958 [2024-07-26 23:35:40.698215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:48.958 [2024-07-26 23:35:40.698250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:27:48.958 [2024-07-26 23:35:40.698262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.389 ms
00:27:48.958 [2024-07-26 23:35:40.698271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:48.958 [2024-07-26 23:35:40.698307] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:48.958 [2024-07-26 23:35:40.698323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:27:48.958 [2024-07-26 23:35:40.698335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3328 / 261120 wr_cnt: 1 state: open
00:27:48.958 [2024-07-26 23:35:40.698346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
[... Bands 4-99: 0 / 261120 wr_cnt: 0 state: free ...]
00:27:48.959 [2024-07-26 23:35:40.699360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:27:48.959 [2024-07-26 23:35:40.699377] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:48.959 [2024-07-26 23:35:40.699386] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa1159e5-ef10-4e53-96ba-e9c2ed419c79
00:27:48.959 [2024-07-26 23:35:40.699401] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264448
00:27:48.959 [2024-07-26 23:35:40.699411] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:27:48.959 [2024-07-26 23:35:40.699421] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:27:48.959 [2024-07-26 23:35:40.699432] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:27:48.959 [2024-07-26 23:35:40.699457] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:48.959 [2024-07-26 23:35:40.699467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:48.959 [2024-07-26 23:35:40.699476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:48.959 [2024-07-26 23:35:40.699486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:48.959 [2024-07-26 23:35:40.699495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:48.959 [2024-07-26 23:35:40.699504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:48.959 [2024-07-26 23:35:40.699515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:48.959 [2024-07-26 23:35:40.699526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*:
[FTL][ftl0] duration: 1.200 ms 00:27:48.959 [2024-07-26 23:35:40.699545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.717854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.219 [2024-07-26 23:35:40.717886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:49.219 [2024-07-26 23:35:40.717897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.290 ms 00:27:49.219 [2024-07-26 23:35:40.717907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.718199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.219 [2024-07-26 23:35:40.718212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:49.219 [2024-07-26 23:35:40.718229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:27:49.219 [2024-07-26 23:35:40.718238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.769084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.219 [2024-07-26 23:35:40.769118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:49.219 [2024-07-26 23:35:40.769130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.219 [2024-07-26 23:35:40.769140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.769187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.219 [2024-07-26 23:35:40.769196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:49.219 [2024-07-26 23:35:40.769211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.219 [2024-07-26 23:35:40.769220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.769286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.219 [2024-07-26 23:35:40.769299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:49.219 [2024-07-26 23:35:40.769309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.219 [2024-07-26 23:35:40.769318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.769334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.219 [2024-07-26 23:35:40.769343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:49.219 [2024-07-26 23:35:40.769353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.219 [2024-07-26 23:35:40.769367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.877893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.219 [2024-07-26 23:35:40.877938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:49.219 [2024-07-26 23:35:40.877952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.219 [2024-07-26 23:35:40.877972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.920832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.219 [2024-07-26 23:35:40.920867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:49.219 
[2024-07-26 23:35:40.920879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.219 [2024-07-26 23:35:40.920893] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.920961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.219 [2024-07-26 23:35:40.920992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:49.219 [2024-07-26 23:35:40.921003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.219 [2024-07-26 23:35:40.921013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.921073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.219 [2024-07-26 23:35:40.921084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:49.219 [2024-07-26 23:35:40.921095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.219 [2024-07-26 23:35:40.921105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.921220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.219 [2024-07-26 23:35:40.921234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:49.219 [2024-07-26 23:35:40.921246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.219 [2024-07-26 23:35:40.921256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.921293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.219 [2024-07-26 23:35:40.921306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:49.219 [2024-07-26 23:35:40.921316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.219 [2024-07-26 23:35:40.921326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.921367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.219 [2024-07-26 23:35:40.921378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:49.219 [2024-07-26 23:35:40.921388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.219 [2024-07-26 23:35:40.921399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.921457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.219 [2024-07-26 23:35:40.921469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:49.219 [2024-07-26 23:35:40.921480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.219 [2024-07-26 23:35:40.921490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.219 [2024-07-26 23:35:40.921608] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 443.893 ms, result 0 00:27:50.598 00:27:50.598 00:27:50.598 23:35:42 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:52.502 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:52.502 23:35:43 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:52.502 23:35:43 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:52.502 23:35:43 -- ftl/dirty_shutdown.sh@31 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:52.502 23:35:43 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:52.502 23:35:43 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:52.502 23:35:44 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:52.502 23:35:44 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:52.502 Process with pid 76672 is not found 00:27:52.502 23:35:44 -- ftl/dirty_shutdown.sh@37 -- # killprocess 76672 00:27:52.502 23:35:44 -- common/autotest_common.sh@926 -- # '[' -z 76672 ']' 00:27:52.502 23:35:44 -- common/autotest_common.sh@930 -- # kill -0 76672 00:27:52.502 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (76672) - No such process 00:27:52.502 23:35:44 -- common/autotest_common.sh@953 -- # echo 'Process with pid 76672 is not found' 00:27:52.502 23:35:44 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:52.762 23:35:44 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:52.762 Remove shared memory files 00:27:52.762 23:35:44 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:52.762 23:35:44 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:52.762 23:35:44 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:52.762 23:35:44 -- ftl/common.sh@207 -- # rm -f rm -f 00:27:52.762 23:35:44 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:52.762 23:35:44 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:52.762 00:27:52.762 real 3m55.732s 00:27:52.762 user 4m26.063s 00:27:52.762 sys 0m38.684s 00:27:52.762 23:35:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.762 23:35:44 -- common/autotest_common.sh@10 -- # set +x 00:27:52.762 ************************************ 00:27:52.762 END TEST ftl_dirty_shutdown 00:27:52.762 ************************************ 00:27:53.021 23:35:44 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:27:53.021 23:35:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:27:53.021 23:35:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:53.021 23:35:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.021 ************************************ 00:27:53.021 START TEST ftl_upgrade_shutdown 00:27:53.021 ************************************ 00:27:53.021 23:35:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:27:53.021 * Looking for test storage... 00:27:53.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:53.021 23:35:44 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:53.021 23:35:44 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:53.021 23:35:44 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:53.021 23:35:44 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
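Two things are worth annotating at this boundary. First, the statistics dumped just before the shutdown above reported "WAF: inf": write amplification factor is conventionally total writes divided by user writes, and this run logged 960 total (all internal) writes against 0 user writes, so the ratio degenerates:

    WAF = total writes / user writes = 960 / 0 -> inf

Second, ftl_dirty_shutdown finished clean (testfile2 md5 verified OK, 3m55.732s real), and the harness moves on to ftl_upgrade_shutdown against the same two devices: 0000:00:07.0 as the FTL base and 0000:00:06.0 as the write-buffer cache.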
00:27:53.021 23:35:44 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:53.021 23:35:44 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:53.021 23:35:44 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:53.021 23:35:44 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:53.021 23:35:44 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:53.021 23:35:44 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:53.021 23:35:44 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:53.021 23:35:44 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:53.021 23:35:44 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:53.021 23:35:44 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:53.021 23:35:44 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:53.021 23:35:44 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:53.021 23:35:44 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:53.021 23:35:44 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:53.021 23:35:44 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:53.021 23:35:44 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:53.021 23:35:44 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:53.021 23:35:44 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:53.021 23:35:44 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:53.021 23:35:44 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:53.021 23:35:44 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:53.021 23:35:44 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:53.021 23:35:44 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.021 23:35:44 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:53.021 23:35:44 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:53.021 23:35:44 -- ftl/common.sh@81 -- # local base_bdev= 00:27:53.021 23:35:44 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:53.021 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:27:53.021 23:35:44 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:53.021 23:35:44 -- ftl/common.sh@89 -- # spdk_tgt_pid=79193 00:27:53.021 23:35:44 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:53.021 23:35:44 -- ftl/common.sh@91 -- # waitforlisten 79193 00:27:53.021 23:35:44 -- common/autotest_common.sh@819 -- # '[' -z 79193 ']' 00:27:53.021 23:35:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.021 23:35:44 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:53.021 23:35:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:53.021 23:35:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.022 23:35:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:53.022 23:35:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.281 [2024-07-26 23:35:44.819157] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:53.281 [2024-07-26 23:35:44.819456] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79193 ] 00:27:53.281 [2024-07-26 23:35:44.993118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.540 [2024-07-26 23:35:45.213024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:53.540 [2024-07-26 23:35:45.213353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.476 23:35:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:54.476 23:35:46 -- common/autotest_common.sh@852 -- # return 0 00:27:54.476 23:35:46 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:54.476 23:35:46 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:54.476 23:35:46 -- ftl/common.sh@99 -- # local params 00:27:54.476 23:35:46 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:54.476 23:35:46 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:54.476 23:35:46 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:54.476 23:35:46 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:27:54.476 23:35:46 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:54.476 23:35:46 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:54.476 23:35:46 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:54.476 23:35:46 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:27:54.476 23:35:46 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:54.476 23:35:46 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:54.476 23:35:46 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:54.476 23:35:46 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:54.476 23:35:46 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:27:54.476 23:35:46 -- ftl/common.sh@54 -- # local name=base 00:27:54.735 23:35:46 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:27:54.735 23:35:46 -- ftl/common.sh@56 -- # local size=20480 00:27:54.735 23:35:46 -- ftl/common.sh@59 -- # local base_bdev 00:27:54.736 23:35:46 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 
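create_base_bdev here reduces to a single JSON-RPC call that binds the QEMU NVMe controller at the given PCIe address into the target; its first namespace then surfaces as <ctrlr-name>n1, which is why the next line records base_bdev=basen1. The same call outside the harness, with the path and address taken from this run:

    # attach the PCIe NVMe controller under the name "base";
    # SPDK exposes namespace 1 as bdev "basen1"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0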
00:27:54.995 23:35:46 -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:54.995 23:35:46 -- ftl/common.sh@62 -- # local base_size 00:27:54.995 23:35:46 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:54.995 23:35:46 -- common/autotest_common.sh@1357 -- # local bdev_name=basen1 00:27:54.995 23:35:46 -- common/autotest_common.sh@1358 -- # local bdev_info 00:27:54.995 23:35:46 -- common/autotest_common.sh@1359 -- # local bs 00:27:54.995 23:35:46 -- common/autotest_common.sh@1360 -- # local nb 00:27:54.995 23:35:46 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:54.995 23:35:46 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:27:54.995 { 00:27:54.995 "name": "basen1", 00:27:54.995 "aliases": [ 00:27:54.995 "74800edd-c3f3-4dc0-b8a2-b470663cfed0" 00:27:54.995 ], 00:27:54.995 "product_name": "NVMe disk", 00:27:54.995 "block_size": 4096, 00:27:54.995 "num_blocks": 1310720, 00:27:54.995 "uuid": "74800edd-c3f3-4dc0-b8a2-b470663cfed0", 00:27:54.995 "assigned_rate_limits": { 00:27:54.995 "rw_ios_per_sec": 0, 00:27:54.995 "rw_mbytes_per_sec": 0, 00:27:54.995 "r_mbytes_per_sec": 0, 00:27:54.995 "w_mbytes_per_sec": 0 00:27:54.995 }, 00:27:54.995 "claimed": true, 00:27:54.995 "claim_type": "read_many_write_one", 00:27:54.995 "zoned": false, 00:27:54.995 "supported_io_types": { 00:27:54.995 "read": true, 00:27:54.995 "write": true, 00:27:54.995 "unmap": true, 00:27:54.995 "write_zeroes": true, 00:27:54.995 "flush": true, 00:27:54.995 "reset": true, 00:27:54.995 "compare": true, 00:27:54.995 "compare_and_write": false, 00:27:54.995 "abort": true, 00:27:54.995 "nvme_admin": true, 00:27:54.995 "nvme_io": true 00:27:54.995 }, 00:27:54.995 "driver_specific": { 00:27:54.995 "nvme": [ 00:27:54.995 { 00:27:54.995 "pci_address": "0000:00:07.0", 00:27:54.995 "trid": { 00:27:54.995 "trtype": "PCIe", 00:27:54.995 "traddr": "0000:00:07.0" 00:27:54.995 }, 00:27:54.995 "ctrlr_data": { 00:27:54.995 "cntlid": 0, 00:27:54.995 "vendor_id": "0x1b36", 00:27:54.995 "model_number": "QEMU NVMe Ctrl", 00:27:54.995 "serial_number": "12341", 00:27:54.995 "firmware_revision": "8.0.0", 00:27:54.995 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:54.995 "oacs": { 00:27:54.995 "security": 0, 00:27:54.995 "format": 1, 00:27:54.995 "firmware": 0, 00:27:54.995 "ns_manage": 1 00:27:54.995 }, 00:27:54.995 "multi_ctrlr": false, 00:27:54.995 "ana_reporting": false 00:27:54.995 }, 00:27:54.995 "vs": { 00:27:54.995 "nvme_version": "1.4" 00:27:54.995 }, 00:27:54.995 "ns_data": { 00:27:54.995 "id": 1, 00:27:54.995 "can_share": false 00:27:54.995 } 00:27:54.995 } 00:27:54.995 ], 00:27:54.995 "mp_policy": "active_passive" 00:27:54.995 } 00:27:54.995 } 00:27:54.995 ]' 00:27:54.995 23:35:46 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:27:54.995 23:35:46 -- common/autotest_common.sh@1362 -- # bs=4096 00:27:54.995 23:35:46 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:27:55.253 23:35:46 -- common/autotest_common.sh@1363 -- # nb=1310720 00:27:55.253 23:35:46 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:27:55.253 23:35:46 -- common/autotest_common.sh@1367 -- # echo 5120 00:27:55.253 23:35:46 -- ftl/common.sh@63 -- # base_size=5120 00:27:55.253 23:35:46 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:55.253 23:35:46 -- ftl/common.sh@67 -- # clear_lvols 00:27:55.253 23:35:46 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:55.253 23:35:46 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 
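get_bdev_size turns the two jq extractions above into a size in MiB: size = num_blocks * block_size / 2^20. For basen1 that works out to

    1310720 blocks * 4096 B/block = 5368709120 B = 5120 MiB

so the requested 20480 MiB base exceeds the raw 5120 MiB namespace ([[ 20480 -le 5120 ]] is false), which is why the lines that follow carve the base out of a thin-provisioned logical volume instead.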
00:27:55.254 23:35:46 -- ftl/common.sh@28 -- # stores=07aba7e3-9de6-4567-9306-2f0508719c59 00:27:55.254 23:35:46 -- ftl/common.sh@29 -- # for lvs in $stores 00:27:55.254 23:35:46 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 07aba7e3-9de6-4567-9306-2f0508719c59 00:27:55.512 23:35:47 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:55.771 23:35:47 -- ftl/common.sh@68 -- # lvs=b2a262e7-6a31-426a-8a37-9986c4ad78b1 00:27:55.771 23:35:47 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u b2a262e7-6a31-426a-8a37-9986c4ad78b1 00:27:55.771 23:35:47 -- ftl/common.sh@107 -- # base_bdev=e3b2edeb-d8e3-4b70-b7c3-eecce08ffe2f 00:27:55.771 23:35:47 -- ftl/common.sh@108 -- # [[ -z e3b2edeb-d8e3-4b70-b7c3-eecce08ffe2f ]] 00:27:55.771 23:35:47 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 e3b2edeb-d8e3-4b70-b7c3-eecce08ffe2f 5120 00:27:55.771 23:35:47 -- ftl/common.sh@35 -- # local name=cache 00:27:55.771 23:35:47 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:27:55.771 23:35:47 -- ftl/common.sh@37 -- # local base_bdev=e3b2edeb-d8e3-4b70-b7c3-eecce08ffe2f 00:27:55.771 23:35:47 -- ftl/common.sh@38 -- # local cache_size=5120 00:27:55.771 23:35:47 -- ftl/common.sh@41 -- # get_bdev_size e3b2edeb-d8e3-4b70-b7c3-eecce08ffe2f 00:27:55.771 23:35:47 -- common/autotest_common.sh@1357 -- # local bdev_name=e3b2edeb-d8e3-4b70-b7c3-eecce08ffe2f 00:27:55.771 23:35:47 -- common/autotest_common.sh@1358 -- # local bdev_info 00:27:55.771 23:35:47 -- common/autotest_common.sh@1359 -- # local bs 00:27:55.771 23:35:47 -- common/autotest_common.sh@1360 -- # local nb 00:27:55.771 23:35:47 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e3b2edeb-d8e3-4b70-b7c3-eecce08ffe2f 00:27:56.031 23:35:47 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:27:56.031 { 00:27:56.031 "name": "e3b2edeb-d8e3-4b70-b7c3-eecce08ffe2f", 00:27:56.031 "aliases": [ 00:27:56.031 "lvs/basen1p0" 00:27:56.031 ], 00:27:56.031 "product_name": "Logical Volume", 00:27:56.031 "block_size": 4096, 00:27:56.031 "num_blocks": 5242880, 00:27:56.031 "uuid": "e3b2edeb-d8e3-4b70-b7c3-eecce08ffe2f", 00:27:56.031 "assigned_rate_limits": { 00:27:56.031 "rw_ios_per_sec": 0, 00:27:56.031 "rw_mbytes_per_sec": 0, 00:27:56.031 "r_mbytes_per_sec": 0, 00:27:56.031 "w_mbytes_per_sec": 0 00:27:56.031 }, 00:27:56.031 "claimed": false, 00:27:56.031 "zoned": false, 00:27:56.031 "supported_io_types": { 00:27:56.031 "read": true, 00:27:56.031 "write": true, 00:27:56.031 "unmap": true, 00:27:56.031 "write_zeroes": true, 00:27:56.031 "flush": false, 00:27:56.031 "reset": true, 00:27:56.031 "compare": false, 00:27:56.031 "compare_and_write": false, 00:27:56.031 "abort": false, 00:27:56.031 "nvme_admin": false, 00:27:56.031 "nvme_io": false 00:27:56.031 }, 00:27:56.031 "driver_specific": { 00:27:56.031 "lvol": { 00:27:56.031 "lvol_store_uuid": "b2a262e7-6a31-426a-8a37-9986c4ad78b1", 00:27:56.031 "base_bdev": "basen1", 00:27:56.031 "thin_provision": true, 00:27:56.031 "snapshot": false, 00:27:56.031 "clone": false, 00:27:56.031 "esnap_clone": false 00:27:56.031 } 00:27:56.031 } 00:27:56.031 } 00:27:56.031 ]' 00:27:56.031 23:35:47 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:27:56.031 23:35:47 -- common/autotest_common.sh@1362 -- # bs=4096 00:27:56.031 23:35:47 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 
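The lvstore dance above gives the upgrade test a clean 20 GiB base on a 5 GiB device: the stale store 07aba7e3-... is deleted, a fresh store named lvs is created on basen1, and basen1p0 is allocated from it with -t (thin provisioning), which is what lets num_blocks 5242880 * 4096 B = 20480 MiB of logical space sit on a 5120 MiB physical namespace. Condensed to the three RPCs as issued in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # drop any leftover store from a previous test
    $rpc bdev_lvol_delete_lvstore -u 07aba7e3-9de6-4567-9306-2f0508719c59
    # new store on the base namespace; prints its UUID (b2a262e7-... here)
    lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)
    # 20480 MiB thin-provisioned volume for FTL to sit on
    $rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs"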
00:27:56.290 23:35:47 -- common/autotest_common.sh@1363 -- # nb=5242880 00:27:56.290 23:35:47 -- common/autotest_common.sh@1366 -- # bdev_size=20480 00:27:56.290 23:35:47 -- common/autotest_common.sh@1367 -- # echo 20480 00:27:56.290 23:35:47 -- ftl/common.sh@41 -- # local base_size=1024 00:27:56.290 23:35:47 -- ftl/common.sh@44 -- # local nvc_bdev 00:27:56.290 23:35:47 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:27:56.290 23:35:48 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:56.290 23:35:48 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:56.290 23:35:48 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:56.549 23:35:48 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:56.549 23:35:48 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:56.549 23:35:48 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d e3b2edeb-d8e3-4b70-b7c3-eecce08ffe2f -c cachen1p0 --l2p_dram_limit 2 00:27:56.810 [2024-07-26 23:35:48.361262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.810 [2024-07-26 23:35:48.361306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:56.810 [2024-07-26 23:35:48.361324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:56.810 [2024-07-26 23:35:48.361334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.810 [2024-07-26 23:35:48.361388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.810 [2024-07-26 23:35:48.361399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:56.810 [2024-07-26 23:35:48.361411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:56.810 [2024-07-26 23:35:48.361420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.810 [2024-07-26 23:35:48.361443] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:56.810 [2024-07-26 23:35:48.362612] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:56.810 [2024-07-26 23:35:48.362651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.810 [2024-07-26 23:35:48.362662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:56.810 [2024-07-26 23:35:48.362676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.209 ms 00:27:56.810 [2024-07-26 23:35:48.362686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.810 [2024-07-26 23:35:48.362832] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID c61e8e01-b804-4543-b8c2-7c525c37f0d0 00:27:56.810 [2024-07-26 23:35:48.364268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.810 [2024-07-26 23:35:48.364297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:56.810 [2024-07-26 23:35:48.364308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:56.810 [2024-07-26 23:35:48.364321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.810 [2024-07-26 23:35:48.371790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.810 [2024-07-26 23:35:48.371826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize 
memory pools 00:27:56.810 [2024-07-26 23:35:48.371837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.432 ms 00:27:56.810 [2024-07-26 23:35:48.371850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.810 [2024-07-26 23:35:48.371898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.810 [2024-07-26 23:35:48.371912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:56.810 [2024-07-26 23:35:48.371923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:56.810 [2024-07-26 23:35:48.371938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.810 [2024-07-26 23:35:48.372204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.810 [2024-07-26 23:35:48.372261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:56.810 [2024-07-26 23:35:48.372276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:56.810 [2024-07-26 23:35:48.372292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.810 [2024-07-26 23:35:48.372324] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:56.810 [2024-07-26 23:35:48.378400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.810 [2024-07-26 23:35:48.378435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:56.810 [2024-07-26 23:35:48.378449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.095 ms 00:27:56.810 [2024-07-26 23:35:48.378459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.810 [2024-07-26 23:35:48.378491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.810 [2024-07-26 23:35:48.378501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:56.810 [2024-07-26 23:35:48.378514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:56.810 [2024-07-26 23:35:48.378524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.810 [2024-07-26 23:35:48.378566] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:56.810 [2024-07-26 23:35:48.378673] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:56.810 [2024-07-26 23:35:48.378693] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:56.810 [2024-07-26 23:35:48.378706] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:56.810 [2024-07-26 23:35:48.378721] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:56.810 [2024-07-26 23:35:48.378733] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:56.810 [2024-07-26 23:35:48.378746] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:56.810 [2024-07-26 23:35:48.378756] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:56.810 [2024-07-26 23:35:48.378767] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:56.810 [2024-07-26 23:35:48.378780] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:56.810 [2024-07-26 23:35:48.378793] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.810 [2024-07-26 23:35:48.378803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:56.810 [2024-07-26 23:35:48.378816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.229 ms 00:27:56.810 [2024-07-26 23:35:48.378825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.810 [2024-07-26 23:35:48.378886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.810 [2024-07-26 23:35:48.378896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:56.810 [2024-07-26 23:35:48.378919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:56.810 [2024-07-26 23:35:48.378929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.810 [2024-07-26 23:35:48.379022] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:56.810 [2024-07-26 23:35:48.379036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:56.810 [2024-07-26 23:35:48.379049] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:56.810 [2024-07-26 23:35:48.379059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.810 [2024-07-26 23:35:48.379070] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:56.810 [2024-07-26 23:35:48.379079] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:56.810 [2024-07-26 23:35:48.379090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:56.810 [2024-07-26 23:35:48.379100] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:56.810 [2024-07-26 23:35:48.379111] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:56.810 [2024-07-26 23:35:48.379119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.810 [2024-07-26 23:35:48.379130] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:56.810 [2024-07-26 23:35:48.379140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:56.810 [2024-07-26 23:35:48.379154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.810 [2024-07-26 23:35:48.379164] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:56.810 [2024-07-26 23:35:48.379174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:56.810 [2024-07-26 23:35:48.379183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.810 [2024-07-26 23:35:48.379196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:56.810 [2024-07-26 23:35:48.379204] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:56.810 [2024-07-26 23:35:48.379215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.810 [2024-07-26 23:35:48.379223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:56.810 [2024-07-26 23:35:48.379234] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:56.810 [2024-07-26 23:35:48.379243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:56.810 [2024-07-26 23:35:48.379254] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:56.810 [2024-07-26 23:35:48.379264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:56.810 [2024-07-26 23:35:48.379274] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:56.810 [2024-07-26 23:35:48.379283] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:56.810 [2024-07-26 23:35:48.379294] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:56.810 [2024-07-26 23:35:48.379303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:56.810 [2024-07-26 23:35:48.379314] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:56.810 [2024-07-26 23:35:48.379322] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:56.810 [2024-07-26 23:35:48.379333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:56.810 [2024-07-26 23:35:48.379341] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:56.810 [2024-07-26 23:35:48.379354] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:56.810 [2024-07-26 23:35:48.379363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:56.811 [2024-07-26 23:35:48.379374] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:56.811 [2024-07-26 23:35:48.379383] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:56.811 [2024-07-26 23:35:48.379394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.811 [2024-07-26 23:35:48.379402] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:56.811 [2024-07-26 23:35:48.379414] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:56.811 [2024-07-26 23:35:48.379423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.811 [2024-07-26 23:35:48.379433] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:56.811 [2024-07-26 23:35:48.379443] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:56.811 [2024-07-26 23:35:48.379455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:56.811 [2024-07-26 23:35:48.379464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:56.811 [2024-07-26 23:35:48.379476] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:56.811 [2024-07-26 23:35:48.379486] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:56.811 [2024-07-26 23:35:48.379497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:56.811 [2024-07-26 23:35:48.379506] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:56.811 [2024-07-26 23:35:48.379519] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:56.811 [2024-07-26 23:35:48.379528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:56.811 [2024-07-26 23:35:48.379540] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:56.811 [2024-07-26 23:35:48.379552] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:56.811 [2024-07-26 23:35:48.379569] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:56.811 [2024-07-26 23:35:48.379578] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:56.811 [2024-07-26 23:35:48.379590] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:56.811 [2024-07-26 23:35:48.379601] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:56.811 [2024-07-26 23:35:48.379613] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:56.811 [2024-07-26 23:35:48.379623] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:56.811 [2024-07-26 23:35:48.379635] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:56.811 [2024-07-26 23:35:48.379644] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:56.811 [2024-07-26 23:35:48.379657] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:56.811 [2024-07-26 23:35:48.379666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:56.811 [2024-07-26 23:35:48.379678] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:56.811 [2024-07-26 23:35:48.379687] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:56.811 [2024-07-26 23:35:48.379702] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:56.811 [2024-07-26 23:35:48.379712] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:56.811 [2024-07-26 23:35:48.379724] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:56.811 [2024-07-26 23:35:48.379734] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:56.811 [2024-07-26 23:35:48.379746] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:56.811 [2024-07-26 23:35:48.379755] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:56.811 [2024-07-26 23:35:48.379768] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:56.811 [2024-07-26 23:35:48.379778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.811 [2024-07-26 23:35:48.379790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:56.811 [2024-07-26 23:35:48.379799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.821 ms 00:27:56.811 [2024-07-26 23:35:48.379811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.811 [2024-07-26 23:35:48.402729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.811 [2024-07-26 23:35:48.402764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: 
Initialize metadata 00:27:56.811 [2024-07-26 23:35:48.402778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.917 ms 00:27:56.811 [2024-07-26 23:35:48.402790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.811 [2024-07-26 23:35:48.402826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.811 [2024-07-26 23:35:48.402840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:56.811 [2024-07-26 23:35:48.402850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:56.811 [2024-07-26 23:35:48.402861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.811 [2024-07-26 23:35:48.451820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.811 [2024-07-26 23:35:48.451860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:56.811 [2024-07-26 23:35:48.451874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 48.992 ms 00:27:56.811 [2024-07-26 23:35:48.451886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.811 [2024-07-26 23:35:48.451914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.811 [2024-07-26 23:35:48.451930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:56.811 [2024-07-26 23:35:48.451940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:56.811 [2024-07-26 23:35:48.451951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.811 [2024-07-26 23:35:48.452441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.811 [2024-07-26 23:35:48.452462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:56.811 [2024-07-26 23:35:48.452489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.433 ms 00:27:56.811 [2024-07-26 23:35:48.452502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.811 [2024-07-26 23:35:48.452539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.811 [2024-07-26 23:35:48.452566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:56.811 [2024-07-26 23:35:48.452577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:56.811 [2024-07-26 23:35:48.452590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.811 [2024-07-26 23:35:48.474939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.811 [2024-07-26 23:35:48.474983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:56.811 [2024-07-26 23:35:48.474997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.366 ms 00:27:56.811 [2024-07-26 23:35:48.475009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.811 [2024-07-26 23:35:48.487511] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:56.811 [2024-07-26 23:35:48.488571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.811 [2024-07-26 23:35:48.488602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:56.811 [2024-07-26 23:35:48.488617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.504 ms 00:27:56.811 [2024-07-26 23:35:48.488628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.811 [2024-07-26 23:35:48.523704] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.811 [2024-07-26 23:35:48.523742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:56.811 [2024-07-26 23:35:48.523758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 35.102 ms 00:27:56.811 [2024-07-26 23:35:48.523768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.811 [2024-07-26 23:35:48.523813] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:27:56.811 [2024-07-26 23:35:48.523827] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:28:01.007 [2024-07-26 23:35:52.563720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.008 [2024-07-26 23:35:52.563776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:01.008 [2024-07-26 23:35:52.563795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4046.462 ms 00:28:01.008 [2024-07-26 23:35:52.563806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.008 [2024-07-26 23:35:52.563914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.008 [2024-07-26 23:35:52.563926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:01.008 [2024-07-26 23:35:52.563951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:28:01.008 [2024-07-26 23:35:52.563961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.008 [2024-07-26 23:35:52.599206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.008 [2024-07-26 23:35:52.599241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:01.008 [2024-07-26 23:35:52.599257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 35.227 ms 00:28:01.008 [2024-07-26 23:35:52.599267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.008 [2024-07-26 23:35:52.634051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.008 [2024-07-26 23:35:52.634084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:01.008 [2024-07-26 23:35:52.634102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 34.797 ms 00:28:01.008 [2024-07-26 23:35:52.634111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.008 [2024-07-26 23:35:52.634524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.008 [2024-07-26 23:35:52.634539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:01.008 [2024-07-26 23:35:52.634552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.375 ms 00:28:01.008 [2024-07-26 23:35:52.634562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.008 [2024-07-26 23:35:52.727254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.008 [2024-07-26 23:35:52.727288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:01.008 [2024-07-26 23:35:52.727304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 92.792 ms 00:28:01.008 [2024-07-26 23:35:52.727314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.267 [2024-07-26 23:35:52.764425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.267 
[2024-07-26 23:35:52.764460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:01.267 [2024-07-26 23:35:52.764475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 37.127 ms 00:28:01.267 [2024-07-26 23:35:52.764488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.267 [2024-07-26 23:35:52.766656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.267 [2024-07-26 23:35:52.766685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:28:01.267 [2024-07-26 23:35:52.766703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.130 ms 00:28:01.267 [2024-07-26 23:35:52.766713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.267 [2024-07-26 23:35:52.802493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.267 [2024-07-26 23:35:52.802526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:01.267 [2024-07-26 23:35:52.802542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 35.773 ms 00:28:01.267 [2024-07-26 23:35:52.802551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.267 [2024-07-26 23:35:52.802596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.267 [2024-07-26 23:35:52.802606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:01.267 [2024-07-26 23:35:52.802619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:01.267 [2024-07-26 23:35:52.802628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.267 [2024-07-26 23:35:52.802722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:01.267 [2024-07-26 23:35:52.802735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:01.267 [2024-07-26 23:35:52.802751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:01.267 [2024-07-26 23:35:52.802760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:01.267 [2024-07-26 23:35:52.803764] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4449.284 ms, result 0 00:28:01.267 { 00:28:01.267 "name": "ftl", 00:28:01.267 "uuid": "c61e8e01-b804-4543-b8c2-7c525c37f0d0" 00:28:01.267 } 00:28:01.267 23:35:52 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:01.267 [2024-07-26 23:35:52.994628] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.528 23:35:53 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:01.528 23:35:53 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:01.812 [2024-07-26 23:35:53.314378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:28:01.812 23:35:53 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:01.812 [2024-07-26 23:35:53.491849] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:01.812 23:35:53 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:02.114 Fill FTL, iteration 1 
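With FTL startup finished (result 0, bdev "ftl", UUID c61e8e01-...), the target exports it over NVMe/TCP so a separate initiator process can drive the fill I/O: one TCP transport, a subsystem carrying the ftl bdev as its only namespace, and a loopback listener on port 4420. The four RPCs, exactly as issued above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport --trtype TCP
    # -a: allow any host, -m 1: at most one namespace
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
        -t TCP -f ipv4 -s 4420 -a 127.0.0.1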
00:28:02.114 23:35:53 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:02.114 23:35:53 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:02.114 23:35:53 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:02.114 23:35:53 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:02.114 23:35:53 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:02.114 23:35:53 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:02.114 23:35:53 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:02.114 23:35:53 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:02.114 23:35:53 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:02.114 23:35:53 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:02.114 23:35:53 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:02.114 23:35:53 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:02.114 23:35:53 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:02.114 23:35:53 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:02.114 23:35:53 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:02.114 23:35:53 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:02.114 23:35:53 -- ftl/common.sh@163 -- # spdk_ini_pid=79323 00:28:02.114 23:35:53 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:02.114 23:35:53 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:02.114 23:35:53 -- ftl/common.sh@165 -- # waitforlisten 79323 /var/tmp/spdk.tgt.sock 00:28:02.114 23:35:53 -- common/autotest_common.sh@819 -- # '[' -z 79323 ']' 00:28:02.114 23:35:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:02.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:02.114 23:35:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:02.114 23:35:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:02.114 23:35:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:02.114 23:35:53 -- common/autotest_common.sh@10 -- # set +x 00:28:02.373 [2024-07-26 23:35:53.906704] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
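The variables dumped at upgrade_shutdown.sh@28-35 drive a two-pass fill-and-verify loop: each pass pushes count=1024 blocks of bs=1048576 bytes (1 GiB) of random data into ftln1 at queue depth qd=2, reads the same range back, and records its MD5 in the sums array. A hedged reconstruction of the loop skeleton implied by the trace (tcp_dd wraps the spdk_dd invocations seen below; $file stands for /home/vagrant/spdk_repo/spdk/test/ftl/file):

    # Loop shape implied by upgrade_shutdown.sh@38-48 -- a sketch, not the verbatim script
    seek=0; skip=0; bs=1048576; count=1024; iterations=2; qd=2; sums=()
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$file" --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum "$file" | cut -f1 -d' ')
    done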
00:28:02.373 [2024-07-26 23:35:53.906805] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79323 ] 00:28:02.373 [2024-07-26 23:35:54.073801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.632 [2024-07-26 23:35:54.282789] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:02.632 [2024-07-26 23:35:54.282995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.011 23:35:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:04.011 23:35:55 -- common/autotest_common.sh@852 -- # return 0 00:28:04.011 23:35:55 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:04.011 ftln1 00:28:04.011 23:35:55 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:04.011 23:35:55 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:04.270 23:35:55 -- ftl/common.sh@173 -- # echo ']}' 00:28:04.270 23:35:55 -- ftl/common.sh@176 -- # killprocess 79323 00:28:04.270 23:35:55 -- common/autotest_common.sh@926 -- # '[' -z 79323 ']' 00:28:04.270 23:35:55 -- common/autotest_common.sh@930 -- # kill -0 79323 00:28:04.270 23:35:55 -- common/autotest_common.sh@931 -- # uname 00:28:04.270 23:35:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:04.270 23:35:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79323 00:28:04.270 killing process with pid 79323 00:28:04.270 23:35:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:04.270 23:35:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:04.270 23:35:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79323' 00:28:04.270 23:35:55 -- common/autotest_common.sh@945 -- # kill 79323 00:28:04.270 23:35:55 -- common/autotest_common.sh@950 -- # wait 79323 00:28:06.806 23:35:58 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:06.806 23:35:58 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:06.806 [2024-07-26 23:35:58.135019] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
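tcp_initiator_setup (ftl/common.sh@151-177), traced above, exists only to manufacture a bdev config for spdk_dd: a short-lived spdk_tgt is pinned to core 1 with its own RPC socket, attaches to the exported subsystem as controller 'ftl' (whose namespace surfaces as ftln1), has its bdev subsystem serialized to JSON, and is then killed. Every later spdk_dd run replays that JSON via --json instead of talking to a live initiator-side target. A rough reconstruction; the redirection into ini.json is inferred from the [[ -f ... ]] guard at @153 rather than shown verbatim:

    # Sketch of tcp_initiator_setup as implied by the ftl/common.sh trace
    spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!
    waitforlisten $spdk_ini_pid /var/tmp/spdk.tgt.sock
    rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    {
        echo '{"subsystems": ['
        rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    killprocess $spdk_ini_pid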
00:28:06.806 [2024-07-26 23:35:58.135148] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79381 ] 00:28:06.806 [2024-07-26 23:35:58.305042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.806 [2024-07-26 23:35:58.517164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.595  Copying: 258/1024 [MB] (258 MBps) Copying: 516/1024 [MB] (258 MBps) Copying: 776/1024 [MB] (260 MBps) Copying: 1024/1024 [MB] (average 257 MBps) 00:28:12.595 00:28:12.595 Calculate MD5 checksum, iteration 1 00:28:12.595 23:36:04 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:12.595 23:36:04 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:12.595 23:36:04 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:12.595 23:36:04 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:12.595 23:36:04 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:12.595 23:36:04 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:12.595 23:36:04 -- ftl/common.sh@154 -- # return 0 00:28:12.595 23:36:04 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:12.595 [2024-07-26 23:36:04.265649] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
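The first fill pass above moved 1024 MiB at an average of 257 MBps, i.e. roughly four seconds on the write path; the read-back below runs at more than twice that rate. Note also that tcp_initiator_setup does not respawn the helper target this time: the guard at ftl/common.sh@153 finds the config built on the first call and returns immediately (@154), which amounts to:

    # ftl/common.sh@153-154: reuse the initiator config once it exists
    [[ -f "$ini_json" ]] && return 0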
00:28:12.595 [2024-07-26 23:36:04.266001] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79443 ] 00:28:12.854 [2024-07-26 23:36:04.439574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.113 [2024-07-26 23:36:04.648168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.431  Copying: 617/1024 [MB] (617 MBps) Copying: 1024/1024 [MB] (average 616 MBps) 00:28:16.431 00:28:16.431 23:36:07 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:16.431 23:36:07 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:18.335 23:36:09 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:18.335 23:36:09 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=940f8e42f34ad157fed9c927c2a3c92a 00:28:18.335 Fill FTL, iteration 2 00:28:18.335 23:36:09 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:18.335 23:36:09 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:18.335 23:36:09 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:18.335 23:36:09 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:18.335 23:36:09 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:18.335 23:36:09 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:18.335 23:36:09 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:18.335 23:36:09 -- ftl/common.sh@154 -- # return 0 00:28:18.335 23:36:09 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:18.335 [2024-07-26 23:36:09.735296] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
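With bs=1048576 and count=1024 every pass covers exactly 1 GiB, and seek/skip are counted in bs-sized blocks, so the offsets advance in lockstep:

    iteration 1: write --seek=0,    verify --skip=0
    iteration 2: write --seek=1024, verify --skip=1024
    after loop:  seek=2048 blocks = 2 GiB of user data in total

That total matches the FTL accounting printed at shutdown further below: 524288 user-written LBAs at the 4 KiB FTL block size (implied by the layout dump, where data_nvc's 0x100000 blocks span 4096.00 MiB) is likewise 524288 x 4 KiB = 2 GiB.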
00:28:18.335 [2024-07-26 23:36:09.735576] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79511 ] 00:28:18.335 [2024-07-26 23:36:09.905921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.594 [2024-07-26 23:36:10.112398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.598  Copying: 253/1024 [MB] (253 MBps) Copying: 492/1024 [MB] (239 MBps) Copying: 730/1024 [MB] (238 MBps) Copying: 971/1024 [MB] (241 MBps) Copying: 1024/1024 [MB] (average 242 MBps) 00:28:24.598 00:28:24.598 23:36:16 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:24.598 Calculate MD5 checksum, iteration 2 00:28:24.598 23:36:16 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:24.598 23:36:16 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:24.598 23:36:16 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:24.598 23:36:16 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:24.598 23:36:16 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:24.598 23:36:16 -- ftl/common.sh@154 -- # return 0 00:28:24.598 23:36:16 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:24.598 [2024-07-26 23:36:16.101163] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
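Once the second pass completes, the test toggles FTL properties over RPC and sanity-checks cache occupancy. The jq filter it uses a little further below (upgrade_shutdown.sh@63) simply counts cache chunks holding data:

    # Count cache_device chunks with non-zero utilization; the run below yields used=3
    rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[]
               | select(.utilization != 0.0)] | length'

which agrees with the property dump: chunks 0 and 1 are CLOSED at utilization 1.0, chunk 2 is OPEN at 0.001953125, and only chunk 3 is still empty, so the used-chunk check at @64 does not trip and the shutdown path proceeds.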
00:28:24.598 [2024-07-26 23:36:16.101606] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79575 ] 00:28:24.598 [2024-07-26 23:36:16.266418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.856 [2024-07-26 23:36:16.472603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.235  Copying: 605/1024 [MB] (605 MBps) Copying: 1024/1024 [MB] (average 603 MBps) 00:28:29.235 00:28:29.235 23:36:20 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:29.235 23:36:20 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:30.646 23:36:22 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:30.646 23:36:22 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=edac69690ee97a74d157e9e3c7dc6984 00:28:30.646 23:36:22 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:30.646 23:36:22 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:30.646 23:36:22 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:30.905 [2024-07-26 23:36:22.412713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.905 [2024-07-26 23:36:22.412763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:30.905 [2024-07-26 23:36:22.412779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:30.905 [2024-07-26 23:36:22.412790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.905 [2024-07-26 23:36:22.412816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.905 [2024-07-26 23:36:22.412826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:30.905 [2024-07-26 23:36:22.412836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:30.905 [2024-07-26 23:36:22.412845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.905 [2024-07-26 23:36:22.412868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:30.905 [2024-07-26 23:36:22.412878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:30.905 [2024-07-26 23:36:22.412888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:30.905 [2024-07-26 23:36:22.412897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:30.905 [2024-07-26 23:36:22.412956] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.239 ms, result 0 00:28:30.905 true 00:28:30.905 23:36:22 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:30.905 { 00:28:30.905 "name": "ftl", 00:28:30.905 "properties": [ 00:28:30.905 { 00:28:30.905 "name": "superblock_version", 00:28:30.905 "value": 5, 00:28:30.905 "read-only": true 00:28:30.905 }, 00:28:30.905 { 00:28:30.905 "name": "base_device", 00:28:30.905 "bands": [ 00:28:30.905 { 00:28:30.905 "id": 0, 00:28:30.905 "state": "FREE", 00:28:30.905 "validity": 0.0 00:28:30.905 }, 00:28:30.905 { 00:28:30.905 "id": 1, 00:28:30.905 "state": "FREE", 00:28:30.905 "validity": 0.0 00:28:30.905 }, 00:28:30.905 { 00:28:30.905 "id": 2, 00:28:30.905 "state": "FREE", 00:28:30.905 "validity": 0.0 00:28:30.905 }, 00:28:30.905 { 00:28:30.905 "id": 3, 
00:28:30.905 "state": "FREE", 00:28:30.905 "validity": 0.0 00:28:30.905 }, 00:28:30.905 { 00:28:30.905 "id": 4, 00:28:30.905 "state": "FREE", 00:28:30.905 "validity": 0.0 00:28:30.905 }, 00:28:30.905 { 00:28:30.905 "id": 5, 00:28:30.905 "state": "FREE", 00:28:30.905 "validity": 0.0 00:28:30.905 }, 00:28:30.905 { 00:28:30.905 "id": 6, 00:28:30.905 "state": "FREE", 00:28:30.905 "validity": 0.0 00:28:30.905 }, 00:28:30.905 { 00:28:30.905 "id": 7, 00:28:30.905 "state": "FREE", 00:28:30.905 "validity": 0.0 00:28:30.905 }, 00:28:30.905 { 00:28:30.905 "id": 8, 00:28:30.905 "state": "FREE", 00:28:30.905 "validity": 0.0 00:28:30.905 }, 00:28:30.905 { 00:28:30.906 "id": 9, 00:28:30.906 "state": "FREE", 00:28:30.906 "validity": 0.0 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "id": 10, 00:28:30.906 "state": "FREE", 00:28:30.906 "validity": 0.0 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "id": 11, 00:28:30.906 "state": "FREE", 00:28:30.906 "validity": 0.0 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "id": 12, 00:28:30.906 "state": "FREE", 00:28:30.906 "validity": 0.0 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "id": 13, 00:28:30.906 "state": "FREE", 00:28:30.906 "validity": 0.0 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "id": 14, 00:28:30.906 "state": "FREE", 00:28:30.906 "validity": 0.0 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "id": 15, 00:28:30.906 "state": "FREE", 00:28:30.906 "validity": 0.0 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "id": 16, 00:28:30.906 "state": "FREE", 00:28:30.906 "validity": 0.0 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "id": 17, 00:28:30.906 "state": "FREE", 00:28:30.906 "validity": 0.0 00:28:30.906 } 00:28:30.906 ], 00:28:30.906 "read-only": true 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "name": "cache_device", 00:28:30.906 "type": "bdev", 00:28:30.906 "chunks": [ 00:28:30.906 { 00:28:30.906 "id": 0, 00:28:30.906 "state": "CLOSED", 00:28:30.906 "utilization": 1.0 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "id": 1, 00:28:30.906 "state": "CLOSED", 00:28:30.906 "utilization": 1.0 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "id": 2, 00:28:30.906 "state": "OPEN", 00:28:30.906 "utilization": 0.001953125 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "id": 3, 00:28:30.906 "state": "OPEN", 00:28:30.906 "utilization": 0.0 00:28:30.906 } 00:28:30.906 ], 00:28:30.906 "read-only": true 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "name": "verbose_mode", 00:28:30.906 "value": true, 00:28:30.906 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:30.906 }, 00:28:30.906 { 00:28:30.906 "name": "prep_upgrade_on_shutdown", 00:28:30.906 "value": false, 00:28:30.906 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:30.906 } 00:28:30.906 ] 00:28:30.906 } 00:28:30.906 23:36:22 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:31.165 [2024-07-26 23:36:22.761115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.165 [2024-07-26 23:36:22.761155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:31.165 [2024-07-26 23:36:22.761168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:31.165 [2024-07-26 23:36:22.761178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.165 [2024-07-26 23:36:22.761201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.165 [2024-07-26 
23:36:22.761211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:31.165 [2024-07-26 23:36:22.761221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:31.165 [2024-07-26 23:36:22.761230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.165 [2024-07-26 23:36:22.761259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.165 [2024-07-26 23:36:22.761268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:31.165 [2024-07-26 23:36:22.761278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:31.165 [2024-07-26 23:36:22.761287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.165 [2024-07-26 23:36:22.761346] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.209 ms, result 0 00:28:31.165 true 00:28:31.165 23:36:22 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:31.165 23:36:22 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:31.165 23:36:22 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:31.425 23:36:22 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:31.425 23:36:22 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:31.425 23:36:22 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:31.425 [2024-07-26 23:36:23.138104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.425 [2024-07-26 23:36:23.138148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:31.425 [2024-07-26 23:36:23.138162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:31.425 [2024-07-26 23:36:23.138172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.425 [2024-07-26 23:36:23.138195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.425 [2024-07-26 23:36:23.138205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:31.425 [2024-07-26 23:36:23.138214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:31.425 [2024-07-26 23:36:23.138223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.425 [2024-07-26 23:36:23.138242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:31.425 [2024-07-26 23:36:23.138251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:31.425 [2024-07-26 23:36:23.138261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:31.425 [2024-07-26 23:36:23.138270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:31.425 [2024-07-26 23:36:23.138320] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.207 ms, result 0 00:28:31.425 true 00:28:31.425 23:36:23 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:31.686 { 00:28:31.686 "name": "ftl", 00:28:31.686 "properties": [ 00:28:31.686 { 00:28:31.686 "name": "superblock_version", 00:28:31.686 "value": 5, 00:28:31.686 "read-only": true 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "name": "base_device", 00:28:31.686 
"bands": [ 00:28:31.686 { 00:28:31.686 "id": 0, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 1, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 2, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 3, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 4, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 5, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 6, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 7, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 8, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 9, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 10, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 11, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 12, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 13, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 14, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 15, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 16, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 17, 00:28:31.686 "state": "FREE", 00:28:31.686 "validity": 0.0 00:28:31.686 } 00:28:31.686 ], 00:28:31.686 "read-only": true 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "name": "cache_device", 00:28:31.686 "type": "bdev", 00:28:31.686 "chunks": [ 00:28:31.686 { 00:28:31.686 "id": 0, 00:28:31.686 "state": "CLOSED", 00:28:31.686 "utilization": 1.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 1, 00:28:31.686 "state": "CLOSED", 00:28:31.686 "utilization": 1.0 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 2, 00:28:31.686 "state": "OPEN", 00:28:31.686 "utilization": 0.001953125 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "id": 3, 00:28:31.686 "state": "OPEN", 00:28:31.686 "utilization": 0.0 00:28:31.686 } 00:28:31.686 ], 00:28:31.686 "read-only": true 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "name": "verbose_mode", 00:28:31.686 "value": true, 00:28:31.686 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:31.686 }, 00:28:31.686 { 00:28:31.686 "name": "prep_upgrade_on_shutdown", 00:28:31.686 "value": true, 00:28:31.686 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:31.686 } 00:28:31.686 ] 00:28:31.686 } 00:28:31.686 23:36:23 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:31.686 23:36:23 -- ftl/common.sh@130 -- # [[ -n 79193 ]] 00:28:31.686 23:36:23 -- ftl/common.sh@131 -- # killprocess 79193 00:28:31.686 23:36:23 -- common/autotest_common.sh@926 -- # '[' -z 79193 ']' 00:28:31.686 23:36:23 -- common/autotest_common.sh@930 -- # kill -0 79193 
00:28:31.686 23:36:23 -- common/autotest_common.sh@931 -- # uname 00:28:31.686 23:36:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:31.686 23:36:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79193 00:28:31.686 killing process with pid 79193 00:28:31.686 23:36:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:31.686 23:36:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:31.686 23:36:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79193' 00:28:31.686 23:36:23 -- common/autotest_common.sh@945 -- # kill 79193 00:28:31.686 23:36:23 -- common/autotest_common.sh@950 -- # wait 79193 00:28:33.062 [2024-07-26 23:36:24.397359] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:28:33.062 [2024-07-26 23:36:24.415384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.062 [2024-07-26 23:36:24.415421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:33.062 [2024-07-26 23:36:24.415437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:33.062 [2024-07-26 23:36:24.415446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:33.062 [2024-07-26 23:36:24.415470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:33.062 [2024-07-26 23:36:24.418900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:33.062 [2024-07-26 23:36:24.418932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:33.062 [2024-07-26 23:36:24.418943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.420 ms 00:28:33.062 [2024-07-26 23:36:24.418953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.184 [2024-07-26 23:36:31.516332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.184 [2024-07-26 23:36:31.516385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:41.184 [2024-07-26 23:36:31.516402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7108.861 ms 00:28:41.184 [2024-07-26 23:36:31.516412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.184 [2024-07-26 23:36:31.517601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.184 [2024-07-26 23:36:31.517632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:41.184 [2024-07-26 23:36:31.517644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.173 ms 00:28:41.184 [2024-07-26 23:36:31.517654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.184 [2024-07-26 23:36:31.518590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.184 [2024-07-26 23:36:31.518607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:28:41.184 [2024-07-26 23:36:31.518619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.900 ms 00:28:41.184 [2024-07-26 23:36:31.518629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.184 [2024-07-26 23:36:31.533574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.184 [2024-07-26 23:36:31.533610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:41.184 [2024-07-26 23:36:31.533623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 14.915 ms 00:28:41.184 [2024-07-26 23:36:31.533632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.184 [2024-07-26 23:36:31.543168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.184 [2024-07-26 23:36:31.543202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:41.184 [2024-07-26 23:36:31.543220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.517 ms 00:28:41.184 [2024-07-26 23:36:31.543230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.184 [2024-07-26 23:36:31.543303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.184 [2024-07-26 23:36:31.543315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:41.184 [2024-07-26 23:36:31.543325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:28:41.184 [2024-07-26 23:36:31.543335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.184 [2024-07-26 23:36:31.557686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.184 [2024-07-26 23:36:31.557718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:41.184 [2024-07-26 23:36:31.557730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.358 ms 00:28:41.184 [2024-07-26 23:36:31.557739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.184 [2024-07-26 23:36:31.571920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.184 [2024-07-26 23:36:31.571951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:41.184 [2024-07-26 23:36:31.572098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.171 ms 00:28:41.184 [2024-07-26 23:36:31.572138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.184 [2024-07-26 23:36:31.586263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.184 [2024-07-26 23:36:31.586296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:41.184 [2024-07-26 23:36:31.586308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.056 ms 00:28:41.184 [2024-07-26 23:36:31.586318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.184 [2024-07-26 23:36:31.600771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.184 [2024-07-26 23:36:31.600802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:41.184 [2024-07-26 23:36:31.600815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.416 ms 00:28:41.184 [2024-07-26 23:36:31.600824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.184 [2024-07-26 23:36:31.600855] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:41.184 [2024-07-26 23:36:31.600870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:41.185 [2024-07-26 23:36:31.600882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:41.185 [2024-07-26 23:36:31.600892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:41.185 [2024-07-26 23:36:31.600902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 
23:36:31.600912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.600922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.600932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.600942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.600952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.600976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.600987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.600997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.601007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.601017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.601026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.601036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.601046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.601056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:41.185 [2024-07-26 23:36:31.601068] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:41.185 [2024-07-26 23:36:31.601077] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c61e8e01-b804-4543-b8c2-7c525c37f0d0 00:28:41.185 [2024-07-26 23:36:31.601103] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:41.185 [2024-07-26 23:36:31.601113] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:28:41.185 [2024-07-26 23:36:31.601122] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:41.185 [2024-07-26 23:36:31.601131] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:41.185 [2024-07-26 23:36:31.601141] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:41.185 [2024-07-26 23:36:31.601150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:41.185 [2024-07-26 23:36:31.601161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:41.185 [2024-07-26 23:36:31.601170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:41.185 [2024-07-26 23:36:31.601179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:41.185 [2024-07-26 23:36:31.601189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.185 [2024-07-26 23:36:31.601200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:41.185 [2024-07-26 23:36:31.601211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 
0.335 ms 00:28:41.185 [2024-07-26 23:36:31.601220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.618753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.185 [2024-07-26 23:36:31.618786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:41.185 [2024-07-26 23:36:31.618798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.544 ms 00:28:41.185 [2024-07-26 23:36:31.618808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.619058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.185 [2024-07-26 23:36:31.619071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:41.185 [2024-07-26 23:36:31.619099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.230 ms 00:28:41.185 [2024-07-26 23:36:31.619113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.679199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:41.185 [2024-07-26 23:36:31.679233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:41.185 [2024-07-26 23:36:31.679245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:41.185 [2024-07-26 23:36:31.679255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.679283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:41.185 [2024-07-26 23:36:31.679293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:41.185 [2024-07-26 23:36:31.679304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:41.185 [2024-07-26 23:36:31.679322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.679385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:41.185 [2024-07-26 23:36:31.679397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:41.185 [2024-07-26 23:36:31.679407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:41.185 [2024-07-26 23:36:31.679416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.679433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:41.185 [2024-07-26 23:36:31.679442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:41.185 [2024-07-26 23:36:31.679452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:41.185 [2024-07-26 23:36:31.679461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.785210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:41.185 [2024-07-26 23:36:31.785252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:41.185 [2024-07-26 23:36:31.785265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:41.185 [2024-07-26 23:36:31.785275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.826867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:41.185 [2024-07-26 23:36:31.826903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:41.185 [2024-07-26 23:36:31.826915] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:41.185 [2024-07-26 23:36:31.826924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.827007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:41.185 [2024-07-26 23:36:31.827019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:41.185 [2024-07-26 23:36:31.827030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:41.185 [2024-07-26 23:36:31.827040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.827084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:41.185 [2024-07-26 23:36:31.827095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:41.185 [2024-07-26 23:36:31.827105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:41.185 [2024-07-26 23:36:31.827114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.827232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:41.185 [2024-07-26 23:36:31.827249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:41.185 [2024-07-26 23:36:31.827260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:41.185 [2024-07-26 23:36:31.827270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.827304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:41.185 [2024-07-26 23:36:31.827316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:41.185 [2024-07-26 23:36:31.827326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:41.185 [2024-07-26 23:36:31.827337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.827374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:41.185 [2024-07-26 23:36:31.827389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:41.185 [2024-07-26 23:36:31.827399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:41.185 [2024-07-26 23:36:31.827409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.827454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:41.185 [2024-07-26 23:36:31.827465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:41.185 [2024-07-26 23:36:31.827476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:41.185 [2024-07-26 23:36:31.827487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.185 [2024-07-26 23:36:31.827621] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7424.237 ms, result 0 00:28:43.721 23:36:35 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:43.721 23:36:35 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:43.721 23:36:35 -- ftl/common.sh@81 -- # local base_bdev= 00:28:43.721 23:36:35 -- ftl/common.sh@82 -- # local cache_bdev= 00:28:43.721 23:36:35 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:43.721 23:36:35 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 
--config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:43.721 23:36:35 -- ftl/common.sh@89 -- # spdk_tgt_pid=79781 00:28:43.721 23:36:35 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:43.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.721 23:36:35 -- ftl/common.sh@91 -- # waitforlisten 79781 00:28:43.721 23:36:35 -- common/autotest_common.sh@819 -- # '[' -z 79781 ']' 00:28:43.721 23:36:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.722 23:36:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:43.722 23:36:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.722 23:36:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:43.722 23:36:35 -- common/autotest_common.sh@10 -- # set +x 00:28:43.722 [2024-07-26 23:36:35.176129] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:43.722 [2024-07-26 23:36:35.176244] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79781 ] 00:28:43.722 [2024-07-26 23:36:35.348077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.981 [2024-07-26 23:36:35.553676] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:43.981 [2024-07-26 23:36:35.553859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.920 [2024-07-26 23:36:36.528064] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:44.920 [2024-07-26 23:36:36.528125] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:44.920 [2024-07-26 23:36:36.671053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.920 [2024-07-26 23:36:36.671097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:44.920 [2024-07-26 23:36:36.671111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:44.920 [2024-07-26 23:36:36.671121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.920 [2024-07-26 23:36:36.671171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.920 [2024-07-26 23:36:36.671190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:44.920 [2024-07-26 23:36:36.671200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:44.920 [2024-07-26 23:36:36.671212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.920 [2024-07-26 23:36:36.671234] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:44.920 [2024-07-26 23:36:36.672346] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:44.920 [2024-07-26 23:36:36.672375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.920 [2024-07-26 23:36:36.672388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:44.920 [2024-07-26 23:36:36.672398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.146 ms 00:28:44.920 [2024-07-26 23:36:36.672408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 
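The statistics block dumped during the shutdown above pins down the write amplification for the run:

    WAF = total writes / user writes = 786752 / 524288 = 1.5006

i.e. for the 2 GiB of user data written by the two fill passes, FTL issued roughly 3 GiB of physical writes; the extra ~1 GiB presumably covers the metadata persisted on the prep_upgrade_on_shutdown path (NV cache metadata, valid map, P2L, band and trim metadata, superblock) plus any internal relocation. Meanwhile the target has been relaunched from tgt.json on core 0, and FTL is reloading the same device with cachen1p0 as its write buffer cache.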
00:28:45.181 [2024-07-26 23:36:36.673789] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:45.181 [2024-07-26 23:36:36.693275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.181 [2024-07-26 23:36:36.693426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:45.181 [2024-07-26 23:36:36.693506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.518 ms 00:28:45.181 [2024-07-26 23:36:36.693542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.181 [2024-07-26 23:36:36.693667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.181 [2024-07-26 23:36:36.693712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:45.181 [2024-07-26 23:36:36.693747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:45.181 [2024-07-26 23:36:36.693777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.181 [2024-07-26 23:36:36.700674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.181 [2024-07-26 23:36:36.700814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:45.181 [2024-07-26 23:36:36.700896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.808 ms 00:28:45.181 [2024-07-26 23:36:36.700933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.181 [2024-07-26 23:36:36.701016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.181 [2024-07-26 23:36:36.701054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:45.181 [2024-07-26 23:36:36.701085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:45.181 [2024-07-26 23:36:36.701116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.181 [2024-07-26 23:36:36.701179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.181 [2024-07-26 23:36:36.701213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:45.181 [2024-07-26 23:36:36.701244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:45.181 [2024-07-26 23:36:36.701478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.181 [2024-07-26 23:36:36.701547] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:45.181 [2024-07-26 23:36:36.707344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.181 [2024-07-26 23:36:36.707495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:45.181 [2024-07-26 23:36:36.707575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.818 ms 00:28:45.181 [2024-07-26 23:36:36.707591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.181 [2024-07-26 23:36:36.707636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.181 [2024-07-26 23:36:36.707648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:45.181 [2024-07-26 23:36:36.707659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:45.181 [2024-07-26 23:36:36.707669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.181 [2024-07-26 23:36:36.707718] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:45.181 [2024-07-26 
23:36:36.707743] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:28:45.181 [2024-07-26 23:36:36.707797] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:45.181 [2024-07-26 23:36:36.707829] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:28:45.181 [2024-07-26 23:36:36.707905] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:28:45.181 [2024-07-26 23:36:36.707919] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:45.181 [2024-07-26 23:36:36.707931] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:28:45.181 [2024-07-26 23:36:36.707945] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:45.181 [2024-07-26 23:36:36.707956] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:45.181 [2024-07-26 23:36:36.707985] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:45.181 [2024-07-26 23:36:36.707995] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:45.181 [2024-07-26 23:36:36.708005] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:28:45.181 [2024-07-26 23:36:36.708019] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:28:45.181 [2024-07-26 23:36:36.708030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.181 [2024-07-26 23:36:36.708043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:45.181 [2024-07-26 23:36:36.708054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.314 ms 00:28:45.181 [2024-07-26 23:36:36.708063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.181 [2024-07-26 23:36:36.708130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.181 [2024-07-26 23:36:36.708142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:45.181 [2024-07-26 23:36:36.708152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:45.181 [2024-07-26 23:36:36.708161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.181 [2024-07-26 23:36:36.708228] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:45.181 [2024-07-26 23:36:36.708243] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:45.181 [2024-07-26 23:36:36.708257] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:45.181 [2024-07-26 23:36:36.708268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.181 [2024-07-26 23:36:36.708278] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:45.181 [2024-07-26 23:36:36.708289] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:45.181 [2024-07-26 23:36:36.708299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:45.181 [2024-07-26 23:36:36.708309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:45.181 [2024-07-26 23:36:36.708318] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:45.181 
[2024-07-26 23:36:36.708328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.181 [2024-07-26 23:36:36.708337] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:45.181 [2024-07-26 23:36:36.708346] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:45.181 [2024-07-26 23:36:36.708355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.181 [2024-07-26 23:36:36.708366] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:45.181 [2024-07-26 23:36:36.708375] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:28:45.181 [2024-07-26 23:36:36.708383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.181 [2024-07-26 23:36:36.708392] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:45.181 [2024-07-26 23:36:36.708401] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:28:45.181 [2024-07-26 23:36:36.708410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.181 [2024-07-26 23:36:36.708418] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:28:45.181 [2024-07-26 23:36:36.708428] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:28:45.181 [2024-07-26 23:36:36.708437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:28:45.181 [2024-07-26 23:36:36.708446] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:45.181 [2024-07-26 23:36:36.708455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:45.181 [2024-07-26 23:36:36.708463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:45.182 [2024-07-26 23:36:36.708471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:45.182 [2024-07-26 23:36:36.708480] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:28:45.182 [2024-07-26 23:36:36.708489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:45.182 [2024-07-26 23:36:36.708497] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:45.182 [2024-07-26 23:36:36.708507] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:45.182 [2024-07-26 23:36:36.708516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:45.182 [2024-07-26 23:36:36.708524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:45.182 [2024-07-26 23:36:36.708533] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:28:45.182 [2024-07-26 23:36:36.708541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:45.182 [2024-07-26 23:36:36.708552] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:45.182 [2024-07-26 23:36:36.708560] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:45.182 [2024-07-26 23:36:36.708569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.182 [2024-07-26 23:36:36.708577] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:45.182 [2024-07-26 23:36:36.708586] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:28:45.182 [2024-07-26 23:36:36.708594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.182 [2024-07-26 23:36:36.708604] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 
00:28:45.182 [2024-07-26 23:36:36.708615] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:45.182 [2024-07-26 23:36:36.708624] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:45.182 [2024-07-26 23:36:36.708636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:45.182 [2024-07-26 23:36:36.708645] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:45.182 [2024-07-26 23:36:36.708655] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:45.182 [2024-07-26 23:36:36.708664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:45.182 [2024-07-26 23:36:36.708673] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:45.182 [2024-07-26 23:36:36.708683] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:45.182 [2024-07-26 23:36:36.708692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:45.182 [2024-07-26 23:36:36.708702] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:45.182 [2024-07-26 23:36:36.708714] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:45.182 [2024-07-26 23:36:36.708726] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:45.182 [2024-07-26 23:36:36.708736] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:28:45.182 [2024-07-26 23:36:36.708745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:28:45.182 [2024-07-26 23:36:36.708755] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:28:45.182 [2024-07-26 23:36:36.708765] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:28:45.182 [2024-07-26 23:36:36.708774] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:28:45.182 [2024-07-26 23:36:36.708784] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:28:45.182 [2024-07-26 23:36:36.708794] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:28:45.182 [2024-07-26 23:36:36.708804] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:28:45.182 [2024-07-26 23:36:36.708814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:28:45.182 [2024-07-26 23:36:36.708833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:28:45.182 [2024-07-26 23:36:36.708844] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:28:45.182 [2024-07-26 23:36:36.708855] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 
blk_sz:0x3e0a0 00:28:45.182 [2024-07-26 23:36:36.708865] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:45.182 [2024-07-26 23:36:36.708876] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:45.182 [2024-07-26 23:36:36.708892] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:45.182 [2024-07-26 23:36:36.708903] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:45.182 [2024-07-26 23:36:36.708914] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:45.182 [2024-07-26 23:36:36.708924] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:45.182 [2024-07-26 23:36:36.708935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.182 [2024-07-26 23:36:36.708947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:45.182 [2024-07-26 23:36:36.708958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.742 ms 00:28:45.182 [2024-07-26 23:36:36.708982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.182 [2024-07-26 23:36:36.731898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.182 [2024-07-26 23:36:36.731932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:45.182 [2024-07-26 23:36:36.731945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.907 ms 00:28:45.182 [2024-07-26 23:36:36.731955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.182 [2024-07-26 23:36:36.732004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.182 [2024-07-26 23:36:36.732015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:45.182 [2024-07-26 23:36:36.732026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:45.182 [2024-07-26 23:36:36.732036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.182 [2024-07-26 23:36:36.779876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.182 [2024-07-26 23:36:36.779909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:45.182 [2024-07-26 23:36:36.779924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 47.869 ms 00:28:45.182 [2024-07-26 23:36:36.779935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.182 [2024-07-26 23:36:36.779961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.182 [2024-07-26 23:36:36.779988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:45.182 [2024-07-26 23:36:36.779997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:45.182 [2024-07-26 23:36:36.780007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.182 [2024-07-26 23:36:36.780463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.182 [2024-07-26 23:36:36.780484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:45.182 [2024-07-26 23:36:36.780494] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.408 ms 00:28:45.182 [2024-07-26 23:36:36.780507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.182 [2024-07-26 23:36:36.780545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.182 [2024-07-26 23:36:36.780556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:45.182 [2024-07-26 23:36:36.780565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:45.182 [2024-07-26 23:36:36.780575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.182 [2024-07-26 23:36:36.802878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.182 [2024-07-26 23:36:36.802909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:45.182 [2024-07-26 23:36:36.802922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.316 ms 00:28:45.182 [2024-07-26 23:36:36.802932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.182 [2024-07-26 23:36:36.821179] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:45.182 [2024-07-26 23:36:36.821217] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:45.182 [2024-07-26 23:36:36.821231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.182 [2024-07-26 23:36:36.821242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:45.182 [2024-07-26 23:36:36.821253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.215 ms 00:28:45.182 [2024-07-26 23:36:36.821263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.182 [2024-07-26 23:36:36.840374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.182 [2024-07-26 23:36:36.840410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:45.182 [2024-07-26 23:36:36.840434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.093 ms 00:28:45.182 [2024-07-26 23:36:36.840444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.182 [2024-07-26 23:36:36.857618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.182 [2024-07-26 23:36:36.857652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:45.182 [2024-07-26 23:36:36.857664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.158 ms 00:28:45.182 [2024-07-26 23:36:36.857673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.182 [2024-07-26 23:36:36.874644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.182 [2024-07-26 23:36:36.874679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:45.182 [2024-07-26 23:36:36.874691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.960 ms 00:28:45.182 [2024-07-26 23:36:36.874700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.182 [2024-07-26 23:36:36.875166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.182 [2024-07-26 23:36:36.875183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:45.182 [2024-07-26 23:36:36.875194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.372 ms 00:28:45.182 
[2024-07-26 23:36:36.875203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.442 [2024-07-26 23:36:36.958698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.442 [2024-07-26 23:36:36.958737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:45.442 [2024-07-26 23:36:36.958751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 83.609 ms 00:28:45.442 [2024-07-26 23:36:36.958762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.442 [2024-07-26 23:36:36.969920] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:45.442 [2024-07-26 23:36:36.970534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.442 [2024-07-26 23:36:36.970560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:45.442 [2024-07-26 23:36:36.970572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.748 ms 00:28:45.442 [2024-07-26 23:36:36.970582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.442 [2024-07-26 23:36:36.970643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.442 [2024-07-26 23:36:36.970659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:45.442 [2024-07-26 23:36:36.970670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:45.442 [2024-07-26 23:36:36.970681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.442 [2024-07-26 23:36:36.970735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.442 [2024-07-26 23:36:36.970747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:45.442 [2024-07-26 23:36:36.970758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:45.442 [2024-07-26 23:36:36.970767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.442 [2024-07-26 23:36:36.972742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.442 [2024-07-26 23:36:36.972774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:28:45.442 [2024-07-26 23:36:36.972789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.960 ms 00:28:45.442 [2024-07-26 23:36:36.972797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.442 [2024-07-26 23:36:36.972826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.442 [2024-07-26 23:36:36.972837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:45.442 [2024-07-26 23:36:36.972847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:45.443 [2024-07-26 23:36:36.972858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.443 [2024-07-26 23:36:36.972896] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:45.443 [2024-07-26 23:36:36.972907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.443 [2024-07-26 23:36:36.972916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:45.443 [2024-07-26 23:36:36.972926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:45.443 [2024-07-26 23:36:36.972939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.443 [2024-07-26 23:36:37.007696] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.443 [2024-07-26 23:36:37.007830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:45.443 [2024-07-26 23:36:37.007850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 34.793 ms 00:28:45.443 [2024-07-26 23:36:37.007868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.443 [2024-07-26 23:36:37.007987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.443 [2024-07-26 23:36:37.008017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:45.443 [2024-07-26 23:36:37.008034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:28:45.443 [2024-07-26 23:36:37.008044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.443 [2024-07-26 23:36:37.009110] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 338.153 ms, result 0 00:28:45.443 [2024-07-26 23:36:37.024137] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.443 [2024-07-26 23:36:37.040137] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:28:45.443 [2024-07-26 23:36:37.049268] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:45.703 23:36:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:45.703 23:36:37 -- common/autotest_common.sh@852 -- # return 0 00:28:45.703 23:36:37 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:45.703 23:36:37 -- ftl/common.sh@95 -- # return 0 00:28:45.703 23:36:37 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:45.703 [2024-07-26 23:36:37.425775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.703 [2024-07-26 23:36:37.425824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:45.703 [2024-07-26 23:36:37.425838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:45.703 [2024-07-26 23:36:37.425847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.703 [2024-07-26 23:36:37.425872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.703 [2024-07-26 23:36:37.425882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:45.703 [2024-07-26 23:36:37.425892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:45.703 [2024-07-26 23:36:37.425902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.703 [2024-07-26 23:36:37.425921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.703 [2024-07-26 23:36:37.425931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:45.703 [2024-07-26 23:36:37.425940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:45.703 [2024-07-26 23:36:37.425954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.703 [2024-07-26 23:36:37.426022] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.240 ms, result 0 00:28:45.703 true 00:28:45.703 23:36:37 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:45.962 
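
The JSON that bdev_ftl_get_properties prints next is what the test asserts against: the script walks the cache_device chunks and counts entries with non-zero utilization, expecting 0 once all data has been written back (the used=0 and [[ 0 -ne 0 ]] lines a little further down). The exact filter it runs:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'

The companion band check selects .name == "bands", but in the dump the band list sits under a property named base_device, so that filter appears to count zero regardless of band state; the test only requires zero, so the assertion passes either way.
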
{ 00:28:45.962 "name": "ftl", 00:28:45.962 "properties": [ 00:28:45.962 { 00:28:45.962 "name": "superblock_version", 00:28:45.962 "value": 5, 00:28:45.962 "read-only": true 00:28:45.962 }, 00:28:45.962 { 00:28:45.962 "name": "base_device", 00:28:45.962 "bands": [ 00:28:45.962 { 00:28:45.962 "id": 0, 00:28:45.962 "state": "CLOSED", 00:28:45.963 "validity": 1.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 1, 00:28:45.963 "state": "CLOSED", 00:28:45.963 "validity": 1.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 2, 00:28:45.963 "state": "CLOSED", 00:28:45.963 "validity": 0.007843137254901933 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 3, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 4, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 5, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 6, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 7, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 8, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 9, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 10, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 11, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 12, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 13, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 14, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 15, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 16, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 17, 00:28:45.963 "state": "FREE", 00:28:45.963 "validity": 0.0 00:28:45.963 } 00:28:45.963 ], 00:28:45.963 "read-only": true 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "name": "cache_device", 00:28:45.963 "type": "bdev", 00:28:45.963 "chunks": [ 00:28:45.963 { 00:28:45.963 "id": 0, 00:28:45.963 "state": "OPEN", 00:28:45.963 "utilization": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 1, 00:28:45.963 "state": "OPEN", 00:28:45.963 "utilization": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 2, 00:28:45.963 "state": "FREE", 00:28:45.963 "utilization": 0.0 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "id": 3, 00:28:45.963 "state": "FREE", 00:28:45.963 "utilization": 0.0 00:28:45.963 } 00:28:45.963 ], 00:28:45.963 "read-only": true 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "name": "verbose_mode", 00:28:45.963 "value": true, 00:28:45.963 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:45.963 }, 00:28:45.963 { 00:28:45.963 "name": "prep_upgrade_on_shutdown", 00:28:45.963 "value": false, 00:28:45.963 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:45.963 } 00:28:45.963 ] 00:28:45.963 } 00:28:45.963 23:36:37 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:45.963 23:36:37 -- 
ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:45.963 23:36:37 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:46.223 23:36:37 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:46.223 23:36:37 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:46.223 23:36:37 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:46.223 23:36:37 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:46.223 23:36:37 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:46.483 Validate MD5 checksum, iteration 1 00:28:46.483 23:36:38 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:46.483 23:36:38 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:46.483 23:36:38 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:46.483 23:36:38 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:46.483 23:36:38 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:46.483 23:36:38 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:46.483 23:36:38 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:46.483 23:36:38 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:46.483 23:36:38 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:46.483 23:36:38 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:46.483 23:36:38 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:46.483 23:36:38 -- ftl/common.sh@154 -- # return 0 00:28:46.483 23:36:38 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:46.483 [2024-07-26 23:36:38.095880] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:46.483 [2024-07-26 23:36:38.096209] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79827 ] 00:28:46.742 [2024-07-26 23:36:38.268012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.742 [2024-07-26 23:36:38.483904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.410  Copying: 614/1024 [MB] (614 MBps) Copying: 1024/1024 [MB] (average 616 MBps) 00:28:53.410 00:28:53.410 23:36:44 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:53.410 23:36:44 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:54.787 23:36:46 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:54.787 23:36:46 -- ftl/upgrade_shutdown.sh@103 -- # sum=940f8e42f34ad157fed9c927c2a3c92a 00:28:54.787 23:36:46 -- ftl/upgrade_shutdown.sh@105 -- # [[ 940f8e42f34ad157fed9c927c2a3c92a != \9\4\0\f\8\e\4\2\f\3\4\a\d\1\5\7\f\e\d\9\c\9\2\7\c\2\a\3\c\9\2\a ]] 00:28:54.787 23:36:46 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:54.787 23:36:46 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:54.787 23:36:46 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:54.787 Validate MD5 checksum, iteration 2 00:28:54.787 23:36:46 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:54.787 23:36:46 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:54.787 23:36:46 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:54.787 23:36:46 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:54.787 23:36:46 -- ftl/common.sh@154 -- # return 0 00:28:54.787 23:36:46 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:54.787 [2024-07-26 23:36:46.336413] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
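
Each validation iteration above follows the same pattern: read one 1 GiB window from the exported ftln1 namespace with spdk_dd over NVMe/TCP, hash it, compare against the sum recorded for that window before shutdown, and advance --skip by 1024 MiB. A condensed sketch of the loop (tcp_dd and the dd flags are taken verbatim from the log; testfile, expected and iterations stand in for the bookkeeping upgrade_shutdown.sh actually keeps):

    skip=0
    for ((i = 0; i < iterations; i++)); do
      # pull 1024 x 1 MiB blocks starting at the current window
      tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      sum=$(md5sum "$testfile" | cut -f1 -d' ')
      [[ $sum == "${expected[i]}" ]] || return 1   # expected[i] recorded pre-shutdown
      skip=$((skip + 1024))
    done
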
00:28:54.787 [2024-07-26 23:36:46.336531] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79915 ] 00:28:54.787 [2024-07-26 23:36:46.507413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.046 [2024-07-26 23:36:46.710063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.895  Copying: 643/1024 [MB] (643 MBps) Copying: 1024/1024 [MB] (average 631 MBps) 00:28:58.895 00:28:59.153 23:36:50 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:59.153 23:36:50 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:01.055 23:36:52 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:01.055 23:36:52 -- ftl/upgrade_shutdown.sh@103 -- # sum=edac69690ee97a74d157e9e3c7dc6984 00:29:01.055 23:36:52 -- ftl/upgrade_shutdown.sh@105 -- # [[ edac69690ee97a74d157e9e3c7dc6984 != \e\d\a\c\6\9\6\9\0\e\e\9\7\a\7\4\d\1\5\7\e\9\e\3\c\7\d\c\6\9\8\4 ]] 00:29:01.055 23:36:52 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:01.055 23:36:52 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:01.055 23:36:52 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:01.055 23:36:52 -- ftl/common.sh@137 -- # [[ -n 79781 ]] 00:29:01.055 23:36:52 -- ftl/common.sh@138 -- # kill -9 79781 00:29:01.055 23:36:52 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:01.055 23:36:52 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:01.055 23:36:52 -- ftl/common.sh@81 -- # local base_bdev= 00:29:01.055 23:36:52 -- ftl/common.sh@82 -- # local cache_bdev= 00:29:01.055 23:36:52 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:01.055 23:36:52 -- ftl/common.sh@89 -- # spdk_tgt_pid=79982 00:29:01.055 23:36:52 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:01.055 23:36:52 -- ftl/common.sh@91 -- # waitforlisten 79982 00:29:01.055 23:36:52 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:01.055 23:36:52 -- common/autotest_common.sh@819 -- # '[' -z 79982 ']' 00:29:01.055 23:36:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.055 23:36:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:01.055 23:36:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.055 23:36:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:01.055 23:36:52 -- common/autotest_common.sh@10 -- # set +x 00:29:01.055 [2024-07-26 23:36:52.479797] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
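
What follows is the point of the test: instead of shutting the target down cleanly, tcp_target_shutdown_dirty sends kill -9 to the running spdk_tgt (pid 79781 here) and a fresh target is launched immediately, so FTL comes back against a superblock that was never marked clean and must recover. The helper as reconstructed from the common.sh trace above, a sketch following the logged commands rather than the verbatim function body:

    tcp_target_shutdown_dirty() {
      # SIGKILL leaves FTL no chance to flush metadata or mark the superblock clean
      [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
      unset spdk_tgt_pid
    }
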
00:29:01.055 [2024-07-26 23:36:52.480104] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79982 ] 00:29:01.055 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 818: 79781 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:01.055 [2024-07-26 23:36:52.654831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.314 [2024-07-26 23:36:52.906840] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:01.314 [2024-07-26 23:36:52.907270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.721 [2024-07-26 23:36:54.036230] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:02.721 [2024-07-26 23:36:54.036301] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:02.721 [2024-07-26 23:36:54.177090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.721 [2024-07-26 23:36:54.177133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:02.721 [2024-07-26 23:36:54.177149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:02.721 [2024-07-26 23:36:54.177159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.721 [2024-07-26 23:36:54.177217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.721 [2024-07-26 23:36:54.177237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:02.721 [2024-07-26 23:36:54.177247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:02.721 [2024-07-26 23:36:54.177260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.721 [2024-07-26 23:36:54.177281] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:02.721 [2024-07-26 23:36:54.178353] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:02.721 [2024-07-26 23:36:54.178380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.721 [2024-07-26 23:36:54.178394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:02.721 [2024-07-26 23:36:54.178405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.104 ms 00:29:02.721 [2024-07-26 23:36:54.178415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.721 [2024-07-26 23:36:54.178737] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:02.721 [2024-07-26 23:36:54.205742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.721 [2024-07-26 23:36:54.205786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:02.722 [2024-07-26 23:36:54.205802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.049 ms 00:29:02.722 [2024-07-26 23:36:54.205812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.722 [2024-07-26 23:36:54.219547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.722 [2024-07-26 23:36:54.219579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:02.722 [2024-07-26 23:36:54.219591] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:02.722 [2024-07-26 23:36:54.219600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.722 [2024-07-26 23:36:54.220241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.722 [2024-07-26 23:36:54.220290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:02.722 [2024-07-26 23:36:54.220323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.567 ms 00:29:02.722 [2024-07-26 23:36:54.220353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.722 [2024-07-26 23:36:54.220481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.722 [2024-07-26 23:36:54.220520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:02.722 [2024-07-26 23:36:54.220550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:02.722 [2024-07-26 23:36:54.220580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.722 [2024-07-26 23:36:54.220632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.722 [2024-07-26 23:36:54.220759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:02.722 [2024-07-26 23:36:54.220790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:02.722 [2024-07-26 23:36:54.220802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.722 [2024-07-26 23:36:54.220834] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:02.722 [2024-07-26 23:36:54.226243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.722 [2024-07-26 23:36:54.226395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:02.722 [2024-07-26 23:36:54.226520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.428 ms 00:29:02.722 [2024-07-26 23:36:54.226556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.722 [2024-07-26 23:36:54.226619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.722 [2024-07-26 23:36:54.226652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:02.722 [2024-07-26 23:36:54.226682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:02.722 [2024-07-26 23:36:54.226711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.722 [2024-07-26 23:36:54.226771] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:02.722 [2024-07-26 23:36:54.226827] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:29:02.722 [2024-07-26 23:36:54.227007] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:02.722 [2024-07-26 23:36:54.227080] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:29:02.722 [2024-07-26 23:36:54.227193] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:29:02.722 [2024-07-26 23:36:54.227243] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:02.722 [2024-07-26 23:36:54.227291] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:29:02.722 [2024-07-26 23:36:54.227349] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:02.722 [2024-07-26 23:36:54.227449] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:02.722 [2024-07-26 23:36:54.227499] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:02.722 [2024-07-26 23:36:54.227545] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:02.722 [2024-07-26 23:36:54.227575] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:29:02.722 [2024-07-26 23:36:54.227616] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:29:02.722 [2024-07-26 23:36:54.227647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.722 [2024-07-26 23:36:54.227676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:02.722 [2024-07-26 23:36:54.227705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.880 ms 00:29:02.722 [2024-07-26 23:36:54.227827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.722 [2024-07-26 23:36:54.228021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.722 [2024-07-26 23:36:54.228063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:02.722 [2024-07-26 23:36:54.228093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:29:02.722 [2024-07-26 23:36:54.228122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.722 [2024-07-26 23:36:54.228213] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:02.722 [2024-07-26 23:36:54.228301] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:02.722 [2024-07-26 23:36:54.228344] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:02.722 [2024-07-26 23:36:54.228374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.722 [2024-07-26 23:36:54.228405] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:02.722 [2024-07-26 23:36:54.228435] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:02.722 [2024-07-26 23:36:54.228464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:02.722 [2024-07-26 23:36:54.228492] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:02.722 [2024-07-26 23:36:54.228521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:02.722 [2024-07-26 23:36:54.228549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.722 [2024-07-26 23:36:54.228635] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:02.722 [2024-07-26 23:36:54.228670] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:02.722 [2024-07-26 23:36:54.228699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.722 [2024-07-26 23:36:54.228727] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:02.722 [2024-07-26 23:36:54.228755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:29:02.722 [2024-07-26 23:36:54.228784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.722 [2024-07-26 23:36:54.228813] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:29:02.722 [2024-07-26 23:36:54.228841] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:29:02.722 [2024-07-26 23:36:54.228869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.722 [2024-07-26 23:36:54.228939] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:29:02.722 [2024-07-26 23:36:54.228985] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:29:02.722 [2024-07-26 23:36:54.229015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:29:02.722 [2024-07-26 23:36:54.229044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:02.722 [2024-07-26 23:36:54.229072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:02.722 [2024-07-26 23:36:54.229100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:29:02.722 [2024-07-26 23:36:54.229128] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:02.722 [2024-07-26 23:36:54.229226] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:29:02.722 [2024-07-26 23:36:54.229254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:29:02.722 [2024-07-26 23:36:54.229282] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:02.722 [2024-07-26 23:36:54.229309] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:02.722 [2024-07-26 23:36:54.229337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:29:02.722 [2024-07-26 23:36:54.229364] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:02.722 [2024-07-26 23:36:54.229426] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:29:02.722 [2024-07-26 23:36:54.229493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:29:02.722 [2024-07-26 23:36:54.229526] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:02.722 [2024-07-26 23:36:54.229583] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:02.722 [2024-07-26 23:36:54.229616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.722 [2024-07-26 23:36:54.229646] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:02.722 [2024-07-26 23:36:54.229674] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:29:02.722 [2024-07-26 23:36:54.229702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.722 [2024-07-26 23:36:54.229783] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:02.722 [2024-07-26 23:36:54.229818] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:02.722 [2024-07-26 23:36:54.229854] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:02.722 [2024-07-26 23:36:54.229883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:02.722 [2024-07-26 23:36:54.229923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:02.722 [2024-07-26 23:36:54.230004] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:02.722 [2024-07-26 23:36:54.230038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:02.722 [2024-07-26 23:36:54.230082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:02.722 [2024-07-26 23:36:54.230112] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:29:02.722 [2024-07-26 23:36:54.230140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:02.722 [2024-07-26 23:36:54.230198] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:02.722 [2024-07-26 23:36:54.230325] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.722 [2024-07-26 23:36:54.230429] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:02.722 [2024-07-26 23:36:54.230478] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:29:02.722 [2024-07-26 23:36:54.230525] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:29:02.723 [2024-07-26 23:36:54.230571] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:29:02.723 [2024-07-26 23:36:54.230659] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:29:02.723 [2024-07-26 23:36:54.230708] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:29:02.723 [2024-07-26 23:36:54.230754] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:29:02.723 [2024-07-26 23:36:54.230800] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:29:02.723 [2024-07-26 23:36:54.230919] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:29:02.723 [2024-07-26 23:36:54.231007] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:29:02.723 [2024-07-26 23:36:54.231056] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:29:02.723 [2024-07-26 23:36:54.231102] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:29:02.723 [2024-07-26 23:36:54.231189] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:29:02.723 [2024-07-26 23:36:54.231204] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:02.723 [2024-07-26 23:36:54.231216] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.723 [2024-07-26 23:36:54.231227] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:02.723 [2024-07-26 23:36:54.231240] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:02.723 [2024-07-26 23:36:54.231251] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:02.723 
[2024-07-26 23:36:54.231262] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:02.723 [2024-07-26 23:36:54.231275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.723 [2024-07-26 23:36:54.231285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:02.723 [2024-07-26 23:36:54.231296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.102 ms 00:29:02.723 [2024-07-26 23:36:54.231306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.723 [2024-07-26 23:36:54.258077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.723 [2024-07-26 23:36:54.258188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:02.723 [2024-07-26 23:36:54.258284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.726 ms 00:29:02.723 [2024-07-26 23:36:54.258319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.723 [2024-07-26 23:36:54.258378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.723 [2024-07-26 23:36:54.258415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:02.723 [2024-07-26 23:36:54.258444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:02.723 [2024-07-26 23:36:54.258472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.723 [2024-07-26 23:36:54.314595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.723 [2024-07-26 23:36:54.314725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:02.723 [2024-07-26 23:36:54.314836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 56.143 ms 00:29:02.723 [2024-07-26 23:36:54.314871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.723 [2024-07-26 23:36:54.314928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.723 [2024-07-26 23:36:54.314959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:02.723 [2024-07-26 23:36:54.315012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:02.723 [2024-07-26 23:36:54.315082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.723 [2024-07-26 23:36:54.315220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.723 [2024-07-26 23:36:54.315257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:02.723 [2024-07-26 23:36:54.315287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:29:02.723 [2024-07-26 23:36:54.315368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.723 [2024-07-26 23:36:54.315443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.723 [2024-07-26 23:36:54.315475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:02.723 [2024-07-26 23:36:54.315504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:02.723 [2024-07-26 23:36:54.315579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.723 [2024-07-26 23:36:54.344084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.723 [2024-07-26 23:36:54.344218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:02.723 [2024-07-26 
23:36:54.344343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 28.462 ms 00:29:02.723 [2024-07-26 23:36:54.344386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.723 [2024-07-26 23:36:54.344534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.723 [2024-07-26 23:36:54.344586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:02.723 [2024-07-26 23:36:54.344640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:02.723 [2024-07-26 23:36:54.344670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.723 [2024-07-26 23:36:54.371709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.723 [2024-07-26 23:36:54.371844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:02.723 [2024-07-26 23:36:54.372012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.041 ms 00:29:02.723 [2024-07-26 23:36:54.372051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.723 [2024-07-26 23:36:54.385390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.723 [2024-07-26 23:36:54.385549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:02.723 [2024-07-26 23:36:54.385655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.434 ms 00:29:02.723 [2024-07-26 23:36:54.385691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.984 [2024-07-26 23:36:54.480754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.984 [2024-07-26 23:36:54.480989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:02.984 [2024-07-26 23:36:54.481202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 95.133 ms 00:29:02.984 [2024-07-26 23:36:54.481251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.984 [2024-07-26 23:36:54.481418] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:02.984 [2024-07-26 23:36:54.481578] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:02.984 [2024-07-26 23:36:54.481675] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:02.984 [2024-07-26 23:36:54.481772] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:02.984 [2024-07-26 23:36:54.481907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.984 [2024-07-26 23:36:54.481949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:02.984 [2024-07-26 23:36:54.482018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.555 ms 00:29:02.984 [2024-07-26 23:36:54.482070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.984 [2024-07-26 23:36:54.482269] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:02.984 [2024-07-26 23:36:54.482360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.984 [2024-07-26 23:36:54.482403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:02.984 [2024-07-26 23:36:54.482447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.091 ms 00:29:02.984 [2024-07-26 
23:36:54.482546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.984 [2024-07-26 23:36:54.504168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.984 [2024-07-26 23:36:54.504320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:02.984 [2024-07-26 23:36:54.504343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.568 ms 00:29:02.984 [2024-07-26 23:36:54.504354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.984 [2024-07-26 23:36:54.518346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.984 [2024-07-26 23:36:54.518381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:02.984 [2024-07-26 23:36:54.518393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:29:02.984 [2024-07-26 23:36:54.518420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.984 [2024-07-26 23:36:54.518485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:02.984 [2024-07-26 23:36:54.518497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:29:02.984 [2024-07-26 23:36:54.518513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:02.984 [2024-07-26 23:36:54.518525] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:02.984 [2024-07-26 23:36:54.518833] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:29:03.551 [2024-07-26 23:36:55.087168] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:29:03.551 [2024-07-26 23:36:55.087333] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:29:04.118 [2024-07-26 23:36:55.676243] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:29:04.118 [2024-07-26 23:36:55.676352] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:04.118 [2024-07-26 23:36:55.676369] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:04.118 [2024-07-26 23:36:55.676384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.118 [2024-07-26 23:36:55.676397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:04.118 [2024-07-26 23:36:55.676415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1159.715 ms 00:29:04.118 [2024-07-26 23:36:55.676426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.118 [2024-07-26 23:36:55.676464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.118 [2024-07-26 23:36:55.676475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:04.118 [2024-07-26 23:36:55.676496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:04.118 [2024-07-26 23:36:55.676508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.118 [2024-07-26 23:36:55.689866] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:04.118 [2024-07-26 23:36:55.690197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.118 [2024-07-26 23:36:55.690248] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:04.118 [2024-07-26 23:36:55.690264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.692 ms 00:29:04.118 [2024-07-26 23:36:55.690274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.118 [2024-07-26 23:36:55.690879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.118 [2024-07-26 23:36:55.690894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:29:04.118 [2024-07-26 23:36:55.690905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.528 ms 00:29:04.118 [2024-07-26 23:36:55.690920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.118 [2024-07-26 23:36:55.692967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.118 [2024-07-26 23:36:55.693007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:04.118 [2024-07-26 23:36:55.693020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.031 ms 00:29:04.118 [2024-07-26 23:36:55.693030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.118 [2024-07-26 23:36:55.729835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.118 [2024-07-26 23:36:55.729872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:29:04.118 [2024-07-26 23:36:55.729892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 36.836 ms 00:29:04.118 [2024-07-26 23:36:55.729902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.118 [2024-07-26 23:36:55.730031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.118 [2024-07-26 23:36:55.730046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:04.118 [2024-07-26 23:36:55.730057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:29:04.118 [2024-07-26 23:36:55.730068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.118 [2024-07-26 23:36:55.732712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.118 [2024-07-26 23:36:55.732754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:29:04.118 [2024-07-26 23:36:55.732766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.631 ms 00:29:04.118 [2024-07-26 23:36:55.732797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.118 [2024-07-26 23:36:55.732831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.118 [2024-07-26 23:36:55.732842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:04.118 [2024-07-26 23:36:55.732852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:04.118 [2024-07-26 23:36:55.732862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.118 [2024-07-26 23:36:55.732903] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:04.118 [2024-07-26 23:36:55.732915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.118 [2024-07-26 23:36:55.732926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:04.118 [2024-07-26 23:36:55.732935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:04.118 [2024-07-26 23:36:55.732945] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:04.118 [2024-07-26 23:36:55.733014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:04.118 [2024-07-26 23:36:55.733026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:04.118 [2024-07-26 23:36:55.733036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:29:04.118 [2024-07-26 23:36:55.733046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:04.118 [2024-07-26 23:36:55.734225] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1559.163 ms, result 0 00:29:04.118 [2024-07-26 23:36:55.746532] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.118 [2024-07-26 23:36:55.762490] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:29:04.118 [2024-07-26 23:36:55.772119] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:04.376 Validate MD5 checksum, iteration 1 00:29:04.376 23:36:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:04.376 23:36:56 -- common/autotest_common.sh@852 -- # return 0 00:29:04.376 23:36:56 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:04.376 23:36:56 -- ftl/common.sh@95 -- # return 0 00:29:04.376 23:36:56 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:04.376 23:36:56 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:04.376 23:36:56 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:04.376 23:36:56 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:04.376 23:36:56 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:04.376 23:36:56 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:04.376 23:36:56 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:04.376 23:36:56 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:04.376 23:36:56 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:04.376 23:36:56 -- ftl/common.sh@154 -- # return 0 00:29:04.376 23:36:56 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:04.376 [2024-07-26 23:36:56.117684] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
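An aside on the tcp_dd invocation traced above: ftl/common.sh@198-@199 show it is a thin wrapper that verifies the initiator config exists and then runs spdk_dd pinned to one core against the target's RPC socket. A minimal sketch of the helper as implied by those two traced lines — the function body beyond them, and names like SPDK_BIN_DIR/testdir, are assumptions:

    # Sketch of tcp_dd from test/ftl/common.sh, reconstructed from the
    # @198-@199 trace above; SPDK_BIN_DIR and testdir names are assumed.
    tcp_dd() {
        tcp_initiator_setup   # per @151-@154: checks config/ini.json exists, returns 0
        "$SPDK_BIN_DIR/spdk_dd" "--cpumask=[1]" \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$testdir/config/ini.json" "$@"
    }

Each invocation here reads 1024 MiB from the ftln1 initiator bdev into test/ftl/file, which is then checksummed in the iterations below.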
00:29:04.376 [2024-07-26 23:36:56.118044] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80030 ] 00:29:04.634 [2024-07-26 23:36:56.285254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.892 [2024-07-26 23:36:56.491834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.346  Copying: 649/1024 [MB] (649 MBps) Copying: 1024/1024 [MB] (average 640 MBps) 00:29:10.346 00:29:10.346 23:37:01 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:10.346 23:37:01 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:12.246 23:37:03 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:12.246 Validate MD5 checksum, iteration 2 00:29:12.246 23:37:03 -- ftl/upgrade_shutdown.sh@103 -- # sum=940f8e42f34ad157fed9c927c2a3c92a 00:29:12.246 23:37:03 -- ftl/upgrade_shutdown.sh@105 -- # [[ 940f8e42f34ad157fed9c927c2a3c92a != \9\4\0\f\8\e\4\2\f\3\4\a\d\1\5\7\f\e\d\9\c\9\2\7\c\2\a\3\c\9\2\a ]] 00:29:12.246 23:37:03 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:12.246 23:37:03 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:12.246 23:37:03 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:12.246 23:37:03 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:12.246 23:37:03 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:12.246 23:37:03 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:12.246 23:37:03 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:12.246 23:37:03 -- ftl/common.sh@154 -- # return 0 00:29:12.246 23:37:03 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:12.246 [2024-07-26 23:37:03.697454] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
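The iteration-2 trace starting here follows the same pattern as iteration 1; assembled from the traced line numbers (upgrade_shutdown.sh@96-@105), the validation loop reads roughly as follows. This is a sketch, not a verbatim copy of the script — in particular, the source of the expected checksums (the md5 array, presumably filled during the earlier write phase) is not visible in this excerpt and is an assumption:

    # test_validate_checksum as implied by the @96-@105 trace;
    # iterations and the expected md5[] values are assumed to come
    # from the preceding write phase, which is outside this excerpt.
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        [[ $sum == "${md5[i]}" ]] || return 1
    done

A mismatch would trip the [[ ... != ... ]] test and fail the run; both iterations above match (940f8e42... and edac6969...), after which the trap is cleared and cleanup begins.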
00:29:12.246 [2024-07-26 23:37:03.697720] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80112 ] 00:29:12.246 [2024-07-26 23:37:03.868559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.504 [2024-07-26 23:37:04.070659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.194  Copying: 646/1024 [MB] (646 MBps) Copying: 1024/1024 [MB] (average 634 MBps) 00:29:17.194 00:29:17.194 23:37:08 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:17.194 23:37:08 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:19.098 23:37:10 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:19.098 23:37:10 -- ftl/upgrade_shutdown.sh@103 -- # sum=edac69690ee97a74d157e9e3c7dc6984 00:29:19.098 23:37:10 -- ftl/upgrade_shutdown.sh@105 -- # [[ edac69690ee97a74d157e9e3c7dc6984 != \e\d\a\c\6\9\6\9\0\e\e\9\7\a\7\4\d\1\5\7\e\9\e\3\c\7\d\c\6\9\8\4 ]] 00:29:19.098 23:37:10 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:19.098 23:37:10 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:19.098 23:37:10 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:19.098 23:37:10 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:19.098 23:37:10 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:19.098 23:37:10 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:19.098 23:37:10 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:19.098 23:37:10 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:19.098 23:37:10 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:19.098 23:37:10 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:19.098 23:37:10 -- ftl/common.sh@130 -- # [[ -n 79982 ]] 00:29:19.098 23:37:10 -- ftl/common.sh@131 -- # killprocess 79982 00:29:19.098 23:37:10 -- common/autotest_common.sh@926 -- # '[' -z 79982 ']' 00:29:19.098 23:37:10 -- common/autotest_common.sh@930 -- # kill -0 79982 00:29:19.098 23:37:10 -- common/autotest_common.sh@931 -- # uname 00:29:19.098 23:37:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:19.098 23:37:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79982 00:29:19.098 killing process with pid 79982 00:29:19.098 23:37:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:19.098 23:37:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:19.098 23:37:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79982' 00:29:19.098 23:37:10 -- common/autotest_common.sh@945 -- # kill 79982 00:29:19.098 23:37:10 -- common/autotest_common.sh@950 -- # wait 79982 00:29:20.475 [2024-07-26 23:37:11.931592] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:29:20.475 [2024-07-26 23:37:11.950471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.475 [2024-07-26 23:37:11.950514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:20.475 [2024-07-26 23:37:11.950530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:20.475 [2024-07-26 23:37:11.950544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.475 [2024-07-26 23:37:11.950567] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:20.475 [2024-07-26 23:37:11.954590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.475 [2024-07-26 23:37:11.954618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:20.475 [2024-07-26 23:37:11.954630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.013 ms 00:29:20.475 [2024-07-26 23:37:11.954655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.475 [2024-07-26 23:37:11.954900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.475 [2024-07-26 23:37:11.954912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:20.475 [2024-07-26 23:37:11.954922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.208 ms 00:29:20.475 [2024-07-26 23:37:11.954932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.475 [2024-07-26 23:37:11.956177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.475 [2024-07-26 23:37:11.956221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:20.475 [2024-07-26 23:37:11.956234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.230 ms 00:29:20.475 [2024-07-26 23:37:11.956244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.475 [2024-07-26 23:37:11.957189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.475 [2024-07-26 23:37:11.957209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:29:20.475 [2024-07-26 23:37:11.957220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.911 ms 00:29:20.475 [2024-07-26 23:37:11.957230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.475 [2024-07-26 23:37:11.972299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.475 [2024-07-26 23:37:11.972335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:20.475 [2024-07-26 23:37:11.972349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.049 ms 00:29:20.475 [2024-07-26 23:37:11.972359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.475 [2024-07-26 23:37:11.980434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.475 [2024-07-26 23:37:11.980470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:20.475 [2024-07-26 23:37:11.980482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.051 ms 00:29:20.475 [2024-07-26 23:37:11.980492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.475 [2024-07-26 23:37:11.980571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.475 [2024-07-26 23:37:11.980591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:20.475 [2024-07-26 23:37:11.980602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:29:20.475 [2024-07-26 23:37:11.980611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.475 [2024-07-26 23:37:11.995390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.475 [2024-07-26 23:37:11.995421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:29:20.475 [2024-07-26 23:37:11.995433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.785 
ms 00:29:20.475 [2024-07-26 23:37:11.995457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.475 [2024-07-26 23:37:12.010532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.475 [2024-07-26 23:37:12.010559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:29:20.475 [2024-07-26 23:37:12.010571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.066 ms 00:29:20.475 [2024-07-26 23:37:12.010579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.475 [2024-07-26 23:37:12.024714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.475 [2024-07-26 23:37:12.024744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:20.476 [2024-07-26 23:37:12.024756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.110 ms 00:29:20.476 [2024-07-26 23:37:12.024764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.476 [2024-07-26 23:37:12.038947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.476 [2024-07-26 23:37:12.038982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:20.476 [2024-07-26 23:37:12.038994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.144 ms 00:29:20.476 [2024-07-26 23:37:12.039004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.476 [2024-07-26 23:37:12.039038] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:20.476 [2024-07-26 23:37:12.039055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:20.476 [2024-07-26 23:37:12.039068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:20.476 [2024-07-26 23:37:12.039080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:20.476 [2024-07-26 23:37:12.039090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 
state: free 00:29:20.476 [2024-07-26 23:37:12.039205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:20.476 [2024-07-26 23:37:12.039248] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:20.476 [2024-07-26 23:37:12.039273] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c61e8e01-b804-4543-b8c2-7c525c37f0d0 00:29:20.476 [2024-07-26 23:37:12.039284] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:20.476 [2024-07-26 23:37:12.039294] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:20.476 [2024-07-26 23:37:12.039303] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:20.476 [2024-07-26 23:37:12.039317] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:20.476 [2024-07-26 23:37:12.039326] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:20.476 [2024-07-26 23:37:12.039337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:20.476 [2024-07-26 23:37:12.039347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:20.476 [2024-07-26 23:37:12.039356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:20.476 [2024-07-26 23:37:12.039367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:20.476 [2024-07-26 23:37:12.039377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.476 [2024-07-26 23:37:12.039387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:20.476 [2024-07-26 23:37:12.039398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.340 ms 00:29:20.476 [2024-07-26 23:37:12.039408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.476 [2024-07-26 23:37:12.059296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.476 [2024-07-26 23:37:12.059334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:20.476 [2024-07-26 23:37:12.059348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.902 ms 00:29:20.476 [2024-07-26 23:37:12.059358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.476 [2024-07-26 23:37:12.059631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:20.476 [2024-07-26 23:37:12.059643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:20.476 [2024-07-26 23:37:12.059654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:29:20.476 [2024-07-26 23:37:12.059664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.476 [2024-07-26 23:37:12.130452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:20.476 [2024-07-26 23:37:12.130491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:20.476 [2024-07-26 23:37:12.130506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:20.476 [2024-07-26 
23:37:12.130516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.476 [2024-07-26 23:37:12.130554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:20.476 [2024-07-26 23:37:12.130565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:20.476 [2024-07-26 23:37:12.130575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:20.476 [2024-07-26 23:37:12.130585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.476 [2024-07-26 23:37:12.130665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:20.476 [2024-07-26 23:37:12.130679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:20.476 [2024-07-26 23:37:12.130695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:20.476 [2024-07-26 23:37:12.130705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.476 [2024-07-26 23:37:12.130724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:20.476 [2024-07-26 23:37:12.130734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:20.476 [2024-07-26 23:37:12.130744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:20.476 [2024-07-26 23:37:12.130754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.735 [2024-07-26 23:37:12.256273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:20.735 [2024-07-26 23:37:12.256335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:20.735 [2024-07-26 23:37:12.256351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:20.735 [2024-07-26 23:37:12.256363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.735 [2024-07-26 23:37:12.303464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:20.735 [2024-07-26 23:37:12.303514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:20.735 [2024-07-26 23:37:12.303529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:20.735 [2024-07-26 23:37:12.303540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.735 [2024-07-26 23:37:12.303644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:20.735 [2024-07-26 23:37:12.303657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:20.735 [2024-07-26 23:37:12.303668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:20.735 [2024-07-26 23:37:12.303685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.735 [2024-07-26 23:37:12.303732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:20.735 [2024-07-26 23:37:12.303744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:20.735 [2024-07-26 23:37:12.303754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:20.735 [2024-07-26 23:37:12.303764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.735 [2024-07-26 23:37:12.303887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:20.735 [2024-07-26 23:37:12.303902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:20.735 [2024-07-26 23:37:12.303912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:29:20.735 [2024-07-26 23:37:12.303922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.735 [2024-07-26 23:37:12.303989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:20.735 [2024-07-26 23:37:12.304002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:20.735 [2024-07-26 23:37:12.304030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:20.735 [2024-07-26 23:37:12.304040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.735 [2024-07-26 23:37:12.304088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:20.735 [2024-07-26 23:37:12.304100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:20.735 [2024-07-26 23:37:12.304111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:20.735 [2024-07-26 23:37:12.304122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.735 [2024-07-26 23:37:12.304181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:20.735 [2024-07-26 23:37:12.304194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:20.735 [2024-07-26 23:37:12.304205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:20.735 [2024-07-26 23:37:12.304216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:20.735 [2024-07-26 23:37:12.304365] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 354.427 ms, result 0 00:29:22.123 23:37:13 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:22.123 23:37:13 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:22.123 23:37:13 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:22.123 23:37:13 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:22.123 23:37:13 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:22.123 23:37:13 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:22.123 Remove shared memory files 00:29:22.123 23:37:13 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:22.123 23:37:13 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:22.123 23:37:13 -- ftl/common.sh@205 -- # rm -f rm -f 00:29:22.123 23:37:13 -- ftl/common.sh@206 -- # rm -f rm -f 00:29:22.123 23:37:13 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79781 00:29:22.123 23:37:13 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:22.123 23:37:13 -- ftl/common.sh@209 -- # rm -f rm -f 00:29:22.123 ************************************ 00:29:22.123 END TEST ftl_upgrade_shutdown 00:29:22.123 ************************************ 00:29:22.123 00:29:22.123 real 1m29.153s 00:29:22.123 user 2m1.084s 00:29:22.123 sys 0m24.248s 00:29:22.123 23:37:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:22.123 23:37:13 -- common/autotest_common.sh@10 -- # set +x 00:29:22.123 23:37:13 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:29:22.123 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:29:22.123 23:37:13 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:29:22.123 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:29:22.123 23:37:13 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:29:22.123 23:37:13 -- ftl/ftl.sh@14 -- # killprocess 71984 00:29:22.123 23:37:13 -- common/autotest_common.sh@926 -- # '[' -z 71984 ']' 
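The two "unary operator expected" messages above are script defects rather than test failures: lines 82 and 89 of ftl.sh expand an unset variable inside a single-bracket test, leaving the malformed expression '[ -eq 1 ]', so '[' exits non-zero, both guarded blocks are skipped, and the script falls through to at_ftl_exit. A hedged sketch of the usual fix — the actual flag names tested at those lines are not visible in this log, so SPDK_TEST_FTL_EXTENDED and the helper below are placeholders for illustration:

    # Defaulting the flag avoids the bare '[ -eq 1 ]' when it is unset;
    # the flag name and helper are assumed, not taken from ftl.sh.
    if [[ "${SPDK_TEST_FTL_EXTENDED:-0}" -eq 1 ]]; then
        run_extended_ftl_tests   # hypothetical helper
    fi

The errors are harmless here: cleanup proceeds via killprocess, which (per the common.sh@926-@953 trace around this point) validates the pid with kill -0 and ps before killing and waiting on it, and reports "No such process" for the already-gone pid 71984.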
00:29:22.123 Process with pid 71984 is not found 00:29:22.123 23:37:13 -- common/autotest_common.sh@930 -- # kill -0 71984 00:29:22.123 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (71984) - No such process 00:29:22.123 23:37:13 -- common/autotest_common.sh@953 -- # echo 'Process with pid 71984 is not found' 00:29:22.123 23:37:13 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:29:22.123 23:37:13 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=80253 00:29:22.123 23:37:13 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:22.123 23:37:13 -- ftl/ftl.sh@20 -- # waitforlisten 80253 00:29:22.123 23:37:13 -- common/autotest_common.sh@819 -- # '[' -z 80253 ']' 00:29:22.123 23:37:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.123 23:37:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:22.123 23:37:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.123 23:37:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:22.123 23:37:13 -- common/autotest_common.sh@10 -- # set +x 00:29:22.123 [2024-07-26 23:37:13.868972] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:22.123 [2024-07-26 23:37:13.869099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80253 ] 00:29:22.382 [2024-07-26 23:37:14.043074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.641 [2024-07-26 23:37:14.304152] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:22.641 [2024-07-26 23:37:14.304379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.541 23:37:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:24.541 23:37:15 -- common/autotest_common.sh@852 -- # return 0 00:29:24.541 23:37:15 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:29:24.541 nvme0n1 00:29:24.541 23:37:16 -- ftl/ftl.sh@22 -- # clear_lvols 00:29:24.541 23:37:16 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:24.541 23:37:16 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:24.800 23:37:16 -- ftl/common.sh@28 -- # stores=b2a262e7-6a31-426a-8a37-9986c4ad78b1 00:29:24.800 23:37:16 -- ftl/common.sh@29 -- # for lvs in $stores 00:29:24.800 23:37:16 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2a262e7-6a31-426a-8a37-9986c4ad78b1 00:29:25.059 23:37:16 -- ftl/ftl.sh@23 -- # killprocess 80253 00:29:25.059 23:37:16 -- common/autotest_common.sh@926 -- # '[' -z 80253 ']' 00:29:25.059 23:37:16 -- common/autotest_common.sh@930 -- # kill -0 80253 00:29:25.059 23:37:16 -- common/autotest_common.sh@931 -- # uname 00:29:25.059 23:37:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:25.059 23:37:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80253 00:29:25.059 killing process with pid 80253 00:29:25.059 23:37:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:25.059 23:37:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = 
sudo ']' 00:29:25.059 23:37:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80253' 00:29:25.059 23:37:16 -- common/autotest_common.sh@945 -- # kill 80253 00:29:25.059 23:37:16 -- common/autotest_common.sh@950 -- # wait 80253 00:29:27.592 23:37:19 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:27.852 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:27.852 Waiting for block devices as requested 00:29:27.852 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:29:28.111 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:29:28.111 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:28.371 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:29:33.669 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:29:33.669 23:37:25 -- ftl/ftl.sh@28 -- # remove_shm 00:29:33.669 23:37:25 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:33.669 Remove shared memory files 00:29:33.669 23:37:25 -- ftl/common.sh@205 -- # rm -f rm -f 00:29:33.669 23:37:25 -- ftl/common.sh@206 -- # rm -f rm -f 00:29:33.669 23:37:25 -- ftl/common.sh@207 -- # rm -f rm -f 00:29:33.669 23:37:25 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:33.669 23:37:25 -- ftl/common.sh@209 -- # rm -f rm -f 00:29:33.669 ************************************ 00:29:33.669 END TEST ftl 00:29:33.669 ************************************ 00:29:33.669 00:29:33.669 real 12m10.005s 00:29:33.669 user 14m45.321s 00:29:33.669 sys 1m32.644s 00:29:33.669 23:37:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:33.669 23:37:25 -- common/autotest_common.sh@10 -- # set +x 00:29:33.669 23:37:25 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:33.669 23:37:25 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:33.669 23:37:25 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:33.669 23:37:25 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:33.669 23:37:25 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:33.669 23:37:25 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:33.669 23:37:25 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:33.669 23:37:25 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:29:33.669 23:37:25 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:29:33.669 23:37:25 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:29:33.669 23:37:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:33.669 23:37:25 -- common/autotest_common.sh@10 -- # set +x 00:29:33.669 23:37:25 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:29:33.669 23:37:25 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:29:33.669 23:37:25 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:29:33.669 23:37:25 -- common/autotest_common.sh@10 -- # set +x 00:29:35.576 INFO: APP EXITING 00:29:35.576 INFO: killing all VMs 00:29:35.576 INFO: killing vhost app 00:29:35.576 INFO: EXIT DONE 00:29:36.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:36.515 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:29:36.515 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:29:36.774 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:29:36.774 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:29:37.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:37.713 Cleaning 00:29:37.713 Removing: 
/var/run/dpdk/spdk0/config 00:29:37.713 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:37.713 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:37.713 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:37.713 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:37.713 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:37.713 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:37.713 Removing: /var/run/dpdk/spdk0 00:29:37.713 Removing: /var/run/dpdk/spdk_pid56287 00:29:37.713 Removing: /var/run/dpdk/spdk_pid56530 00:29:37.713 Removing: /var/run/dpdk/spdk_pid56835 00:29:37.713 Removing: /var/run/dpdk/spdk_pid56934 00:29:37.713 Removing: /var/run/dpdk/spdk_pid57053 00:29:37.713 Removing: /var/run/dpdk/spdk_pid57181 00:29:37.713 Removing: /var/run/dpdk/spdk_pid57295 00:29:37.713 Removing: /var/run/dpdk/spdk_pid57340 00:29:37.713 Removing: /var/run/dpdk/spdk_pid57382 00:29:37.713 Removing: /var/run/dpdk/spdk_pid57449 00:29:37.713 Removing: /var/run/dpdk/spdk_pid57577 00:29:37.713 Removing: /var/run/dpdk/spdk_pid58012 00:29:37.713 Removing: /var/run/dpdk/spdk_pid58104 00:29:37.713 Removing: /var/run/dpdk/spdk_pid58193 00:29:37.713 Removing: /var/run/dpdk/spdk_pid58224 00:29:37.713 Removing: /var/run/dpdk/spdk_pid58385 00:29:37.713 Removing: /var/run/dpdk/spdk_pid58408 00:29:37.713 Removing: /var/run/dpdk/spdk_pid58567 00:29:37.713 Removing: /var/run/dpdk/spdk_pid58596 00:29:37.713 Removing: /var/run/dpdk/spdk_pid58660 00:29:37.713 Removing: /var/run/dpdk/spdk_pid58699 00:29:37.713 Removing: /var/run/dpdk/spdk_pid58767 00:29:37.713 Removing: /var/run/dpdk/spdk_pid58800 00:29:37.713 Removing: /var/run/dpdk/spdk_pid58999 00:29:37.713 Removing: /var/run/dpdk/spdk_pid59041 00:29:37.713 Removing: /var/run/dpdk/spdk_pid59121 00:29:37.713 Removing: /var/run/dpdk/spdk_pid59217 00:29:37.713 Removing: /var/run/dpdk/spdk_pid59259 00:29:37.713 Removing: /var/run/dpdk/spdk_pid59337 00:29:37.713 Removing: /var/run/dpdk/spdk_pid59369 00:29:37.713 Removing: /var/run/dpdk/spdk_pid59415 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59451 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59499 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59536 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59577 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59614 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59666 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59693 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59744 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59781 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59828 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59859 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59906 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59943 00:29:37.973 Removing: /var/run/dpdk/spdk_pid59998 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60024 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60076 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60113 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60160 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60191 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60238 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60269 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60316 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60352 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60394 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60431 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60478 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60509 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60561 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60594 00:29:37.973 Removing: 
/var/run/dpdk/spdk_pid60646 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60680 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60730 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60770 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60824 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60857 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60909 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60935 00:29:37.973 Removing: /var/run/dpdk/spdk_pid60988 00:29:37.973 Removing: /var/run/dpdk/spdk_pid61075 00:29:37.973 Removing: /var/run/dpdk/spdk_pid61190 00:29:37.973 Removing: /var/run/dpdk/spdk_pid61362 00:29:37.973 Removing: /var/run/dpdk/spdk_pid61465 00:29:37.973 Removing: /var/run/dpdk/spdk_pid61510 00:29:37.973 Removing: /var/run/dpdk/spdk_pid61969 00:29:37.973 Removing: /var/run/dpdk/spdk_pid62145 00:29:37.973 Removing: /var/run/dpdk/spdk_pid62264 00:29:37.973 Removing: /var/run/dpdk/spdk_pid62319 00:29:37.973 Removing: /var/run/dpdk/spdk_pid62350 00:29:37.973 Removing: /var/run/dpdk/spdk_pid62425 00:29:37.973 Removing: /var/run/dpdk/spdk_pid63108 00:29:37.973 Removing: /var/run/dpdk/spdk_pid63156 00:29:37.973 Removing: /var/run/dpdk/spdk_pid63647 00:29:37.973 Removing: /var/run/dpdk/spdk_pid63757 00:29:37.973 Removing: /var/run/dpdk/spdk_pid63878 00:29:38.233 Removing: /var/run/dpdk/spdk_pid63931 00:29:38.233 Removing: /var/run/dpdk/spdk_pid63962 00:29:38.233 Removing: /var/run/dpdk/spdk_pid63993 00:29:38.233 Removing: /var/run/dpdk/spdk_pid65957 00:29:38.233 Removing: /var/run/dpdk/spdk_pid66107 00:29:38.233 Removing: /var/run/dpdk/spdk_pid66119 00:29:38.233 Removing: /var/run/dpdk/spdk_pid66131 00:29:38.233 Removing: /var/run/dpdk/spdk_pid66181 00:29:38.233 Removing: /var/run/dpdk/spdk_pid66185 00:29:38.233 Removing: /var/run/dpdk/spdk_pid66197 00:29:38.233 Removing: /var/run/dpdk/spdk_pid66248 00:29:38.233 Removing: /var/run/dpdk/spdk_pid66252 00:29:38.233 Removing: /var/run/dpdk/spdk_pid66268 00:29:38.233 Removing: /var/run/dpdk/spdk_pid66314 00:29:38.233 Removing: /var/run/dpdk/spdk_pid66328 00:29:38.233 Removing: /var/run/dpdk/spdk_pid66341 00:29:38.233 Removing: /var/run/dpdk/spdk_pid67771 00:29:38.233 Removing: /var/run/dpdk/spdk_pid67872 00:29:38.233 Removing: /var/run/dpdk/spdk_pid68017 00:29:38.233 Removing: /var/run/dpdk/spdk_pid68121 00:29:38.233 Removing: /var/run/dpdk/spdk_pid68231 00:29:38.233 Removing: /var/run/dpdk/spdk_pid68346 00:29:38.233 Removing: /var/run/dpdk/spdk_pid68484 00:29:38.233 Removing: /var/run/dpdk/spdk_pid68559 00:29:38.233 Removing: /var/run/dpdk/spdk_pid68708 00:29:38.233 Removing: /var/run/dpdk/spdk_pid69108 00:29:38.233 Removing: /var/run/dpdk/spdk_pid69150 00:29:38.233 Removing: /var/run/dpdk/spdk_pid69621 00:29:38.233 Removing: /var/run/dpdk/spdk_pid69805 00:29:38.233 Removing: /var/run/dpdk/spdk_pid69920 00:29:38.233 Removing: /var/run/dpdk/spdk_pid70031 00:29:38.233 Removing: /var/run/dpdk/spdk_pid70101 00:29:38.233 Removing: /var/run/dpdk/spdk_pid70132 00:29:38.233 Removing: /var/run/dpdk/spdk_pid70424 00:29:38.233 Removing: /var/run/dpdk/spdk_pid70505 00:29:38.233 Removing: /var/run/dpdk/spdk_pid70600 00:29:38.233 Removing: /var/run/dpdk/spdk_pid71016 00:29:38.233 Removing: /var/run/dpdk/spdk_pid71168 00:29:38.233 Removing: /var/run/dpdk/spdk_pid71984 00:29:38.233 Removing: /var/run/dpdk/spdk_pid72118 00:29:38.233 Removing: /var/run/dpdk/spdk_pid72354 00:29:38.233 Removing: /var/run/dpdk/spdk_pid72464 00:29:38.233 Removing: /var/run/dpdk/spdk_pid72794 00:29:38.233 Removing: /var/run/dpdk/spdk_pid73068 00:29:38.233 Removing: /var/run/dpdk/spdk_pid73428 
00:29:38.233 Removing: /var/run/dpdk/spdk_pid73653 00:29:38.233 Removing: /var/run/dpdk/spdk_pid73809 00:29:38.233 Removing: /var/run/dpdk/spdk_pid73875 00:29:38.233 Removing: /var/run/dpdk/spdk_pid74029 00:29:38.233 Removing: /var/run/dpdk/spdk_pid74064 00:29:38.233 Removing: /var/run/dpdk/spdk_pid74140 00:29:38.233 Removing: /var/run/dpdk/spdk_pid74345 00:29:38.233 Removing: /var/run/dpdk/spdk_pid74606 00:29:38.493 Removing: /var/run/dpdk/spdk_pid75079 00:29:38.493 Removing: /var/run/dpdk/spdk_pid75584 00:29:38.493 Removing: /var/run/dpdk/spdk_pid76119 00:29:38.493 Removing: /var/run/dpdk/spdk_pid76672 00:29:38.493 Removing: /var/run/dpdk/spdk_pid76857 00:29:38.493 Removing: /var/run/dpdk/spdk_pid76947 00:29:38.493 Removing: /var/run/dpdk/spdk_pid77620 00:29:38.493 Removing: /var/run/dpdk/spdk_pid77693 00:29:38.493 Removing: /var/run/dpdk/spdk_pid78224 00:29:38.493 Removing: /var/run/dpdk/spdk_pid78631 00:29:38.493 Removing: /var/run/dpdk/spdk_pid79193 00:29:38.493 Removing: /var/run/dpdk/spdk_pid79323 00:29:38.493 Removing: /var/run/dpdk/spdk_pid79381 00:29:38.493 Removing: /var/run/dpdk/spdk_pid79443 00:29:38.493 Removing: /var/run/dpdk/spdk_pid79511 00:29:38.493 Removing: /var/run/dpdk/spdk_pid79575 00:29:38.493 Removing: /var/run/dpdk/spdk_pid79781 00:29:38.493 Removing: /var/run/dpdk/spdk_pid79827 00:29:38.493 Removing: /var/run/dpdk/spdk_pid79915 00:29:38.493 Removing: /var/run/dpdk/spdk_pid79982 00:29:38.493 Removing: /var/run/dpdk/spdk_pid80030 00:29:38.493 Removing: /var/run/dpdk/spdk_pid80112 00:29:38.493 Removing: /var/run/dpdk/spdk_pid80253 00:29:38.493 Clean 00:29:38.493 killing process with pid 48290 00:29:38.493 killing process with pid 48295 00:29:38.752 23:37:30 -- common/autotest_common.sh@1436 -- # return 0 00:29:38.752 23:37:30 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:29:38.752 23:37:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:38.752 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:29:38.752 23:37:30 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:29:38.752 23:37:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:38.752 23:37:30 -- common/autotest_common.sh@10 -- # set +x 00:29:38.752 23:37:30 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:38.752 23:37:30 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:38.752 23:37:30 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:38.752 23:37:30 -- spdk/autotest.sh@394 -- # hash lcov 00:29:38.752 23:37:30 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:38.752 23:37:30 -- spdk/autotest.sh@396 -- # hostname 00:29:38.752 23:37:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:39.012 geninfo: WARNING: invalid characters removed from testname! 
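The coverage epilogue traced immediately above and below boils down to capture, merge, then filter. Condensed from the traced autotest.sh@396-@402 commands, with the repeated --rc flags bundled into an assumed LCOV_OPTS shorthand:

    # Condensed from the traced lcov sequence; LCOV_OPTS is an assumed
    # shorthand for the --rc branch/function/legend flags shown in the log.
    lcov $LCOV_OPTS --no-external -q -c -d "$rootdir" -t "$(hostname)" -o cov_test.info
    lcov $LCOV_OPTS --no-external -q -a cov_base.info -a cov_test.info -o cov_total.info
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS --no-external -q -r cov_total.info "$pat" -o cov_total.info
    done

cov_base.info is referenced by the merge but was captured before the tests ran, outside this excerpt; the geninfo warning above about invalid characters in the testname comes from the long fedora38-cloud hostname string passed via -t.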
00:30:05.574 23:37:54 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:05.574 23:37:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:07.480 23:37:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:09.386 23:38:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:11.291 23:38:02 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:13.192 23:38:04 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:15.097 23:38:06 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:15.097 23:38:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:15.097 23:38:06 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:15.097 23:38:06 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.097 23:38:06 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.097 23:38:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.098 23:38:06 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.098 23:38:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.098 23:38:06 -- paths/export.sh@5 -- $ export PATH 00:30:15.098 23:38:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.098 23:38:06 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:15.098 23:38:06 -- common/autobuild_common.sh@438 -- $ date +%s 00:30:15.098 23:38:06 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1722037086.XXXXXX 00:30:15.098 23:38:06 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1722037086.Aqwb53 00:30:15.098 23:38:06 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:30:15.098 23:38:06 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 00:30:15.098 23:38:06 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:30:15.098 23:38:06 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:15.098 23:38:06 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:15.098 23:38:06 -- common/autobuild_common.sh@454 -- $ get_config_params 00:30:15.098 23:38:06 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:30:15.098 23:38:06 -- common/autotest_common.sh@10 -- $ set +x 00:30:15.098 23:38:06 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:30:15.098 23:38:06 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:30:15.098 23:38:06 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:30:15.098 23:38:06 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:15.098 23:38:06 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:15.098 23:38:06 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:15.098 23:38:06 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:15.098 23:38:06 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:15.098 23:38:06 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:15.098 23:38:06 -- common/autotest_common.sh@727 -- $ 
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:15.357 23:38:06 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:15.357 + [[ -n 5103 ]] 00:30:15.357 + sudo kill 5103 00:30:15.368 [Pipeline] } 00:30:15.387 [Pipeline] // timeout 00:30:15.393 [Pipeline] } 00:30:15.411 [Pipeline] // stage 00:30:15.417 [Pipeline] } 00:30:15.434 [Pipeline] // catchError 00:30:15.444 [Pipeline] stage 00:30:15.446 [Pipeline] { (Stop VM) 00:30:15.461 [Pipeline] sh 00:30:15.747 + vagrant halt 00:30:19.038 ==> default: Halting domain... 00:30:25.682 [Pipeline] sh 00:30:25.965 + vagrant destroy -f 00:30:28.500 ==> default: Removing domain... 00:30:29.081 [Pipeline] sh 00:30:29.365 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:30:29.374 [Pipeline] } 00:30:29.393 [Pipeline] // stage 00:30:29.399 [Pipeline] } 00:30:29.417 [Pipeline] // dir 00:30:29.424 [Pipeline] } 00:30:29.442 [Pipeline] // wrap 00:30:29.450 [Pipeline] } 00:30:29.466 [Pipeline] // catchError 00:30:29.477 [Pipeline] stage 00:30:29.480 [Pipeline] { (Epilogue) 00:30:29.496 [Pipeline] sh 00:30:29.778 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:33.977 [Pipeline] catchError 00:30:33.979 [Pipeline] { 00:30:33.994 [Pipeline] sh 00:30:34.278 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:34.278 Artifacts sizes are good 00:30:34.287 [Pipeline] } 00:30:34.304 [Pipeline] // catchError 00:30:34.315 [Pipeline] archiveArtifacts 00:30:34.322 Archiving artifacts 00:30:34.435 [Pipeline] cleanWs 00:30:34.448 [WS-CLEANUP] Deleting project workspace... 00:30:34.448 [WS-CLEANUP] Deferred wipeout is used... 00:30:34.455 [WS-CLEANUP] done 00:30:34.457 [Pipeline] } 00:30:34.474 [Pipeline] // stage 00:30:34.480 [Pipeline] } 00:30:34.496 [Pipeline] // node 00:30:34.502 [Pipeline] End of Pipeline 00:30:34.559 Finished: SUCCESS