00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2037 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3297 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.056 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.056 The recommended git tool is: git 00:00:00.056 using credential 00000000-0000-0000-0000-000000000002 00:00:00.058 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.089 Fetching changes from the remote Git repository 00:00:00.091 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.141 Using shallow fetch with depth 1 00:00:00.141 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.141 > git --version # timeout=10 00:00:00.185 > git --version # 'git version 2.39.2' 00:00:00.185 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.228 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.228 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.279 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.291 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.304 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:04.304 > git config core.sparsecheckout # timeout=10 00:00:04.316 > git read-tree -mu HEAD # timeout=10 00:00:04.333 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:04.353 Commit message: "packer: Add bios builder" 00:00:04.353 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:04.475 [Pipeline] Start of Pipeline 00:00:04.486 [Pipeline] library 00:00:04.487 Loading library shm_lib@master 00:00:07.184 Library shm_lib@master is cached. Copying from home. 00:00:07.216 [Pipeline] node 00:00:07.329 Running on VM-host-SM4 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.332 [Pipeline] { 00:00:07.346 [Pipeline] catchError 00:00:07.353 [Pipeline] { 00:00:07.368 [Pipeline] wrap 00:00:07.377 [Pipeline] { 00:00:07.389 [Pipeline] stage 00:00:07.391 [Pipeline] { (Prologue) 00:00:07.410 [Pipeline] echo 00:00:07.412 Node: VM-host-SM4 00:00:07.418 [Pipeline] cleanWs 00:00:07.425 [WS-CLEANUP] Deleting project workspace... 00:00:07.425 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.431 [WS-CLEANUP] done 00:00:07.589 [Pipeline] setCustomBuildProperty 00:00:07.644 [Pipeline] httpRequest 00:00:07.658 [Pipeline] echo 00:00:07.659 Sorcerer 10.211.164.101 is alive 00:00:07.666 [Pipeline] httpRequest 00:00:07.670 HttpMethod: GET 00:00:07.670 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.670 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.675 Response Code: HTTP/1.1 200 OK 00:00:07.675 Success: Status code 200 is in the accepted range: 200,404 00:00:07.675 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:10.474 [Pipeline] sh 00:00:10.761 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:10.780 [Pipeline] httpRequest 00:00:10.808 [Pipeline] echo 00:00:10.810 Sorcerer 10.211.164.101 is alive 00:00:10.820 [Pipeline] httpRequest 00:00:10.828 HttpMethod: GET 00:00:10.828 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:10.829 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:10.851 Response Code: HTTP/1.1 200 OK 00:00:10.851 Success: Status code 200 is in the accepted range: 200,404 00:00:10.852 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:09.836 [Pipeline] sh 00:01:10.122 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:12.672 [Pipeline] sh 00:01:12.981 + git -C spdk log --oneline -n5 00:01:12.981 dbef7efac test: fix dpdk builds on ubuntu24 00:01:12.981 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:12.981 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:12.981 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:12.981 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:13.049 [Pipeline] writeFile 00:01:13.065 [Pipeline] sh 00:01:13.348 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:13.359 [Pipeline] sh 00:01:13.641 + cat autorun-spdk.conf 00:01:13.641 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.641 SPDK_TEST_NVME=1 00:01:13.641 SPDK_TEST_FTL=1 00:01:13.641 SPDK_TEST_ISAL=1 00:01:13.641 SPDK_RUN_ASAN=1 00:01:13.641 SPDK_RUN_UBSAN=1 00:01:13.641 SPDK_TEST_XNVME=1 00:01:13.641 SPDK_TEST_NVME_FDP=1 00:01:13.641 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:13.648 RUN_NIGHTLY=1 00:01:13.650 [Pipeline] } 00:01:13.666 [Pipeline] // stage 00:01:13.681 [Pipeline] stage 00:01:13.683 [Pipeline] { (Run VM) 00:01:13.698 [Pipeline] sh 00:01:13.980 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:13.980 + echo 'Start stage prepare_nvme.sh' 00:01:13.980 Start stage prepare_nvme.sh 00:01:13.980 + [[ -n 1 ]] 00:01:13.980 + disk_prefix=ex1 00:01:13.980 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:13.980 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:13.980 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:13.980 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.980 ++ SPDK_TEST_NVME=1 00:01:13.980 ++ SPDK_TEST_FTL=1 00:01:13.980 ++ SPDK_TEST_ISAL=1 00:01:13.980 ++ SPDK_RUN_ASAN=1 00:01:13.980 ++ SPDK_RUN_UBSAN=1 00:01:13.980 ++ SPDK_TEST_XNVME=1 00:01:13.980 ++ SPDK_TEST_NVME_FDP=1 00:01:13.980 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:13.980 ++ RUN_NIGHTLY=1 00:01:13.980 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:13.980 + 
nvme_files=() 00:01:13.980 + declare -A nvme_files 00:01:13.980 + backend_dir=/var/lib/libvirt/images/backends 00:01:13.980 + nvme_files['nvme.img']=5G 00:01:13.980 + nvme_files['nvme-cmb.img']=5G 00:01:13.980 + nvme_files['nvme-multi0.img']=4G 00:01:13.980 + nvme_files['nvme-multi1.img']=4G 00:01:13.980 + nvme_files['nvme-multi2.img']=4G 00:01:13.980 + nvme_files['nvme-openstack.img']=8G 00:01:13.980 + nvme_files['nvme-zns.img']=5G 00:01:13.980 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:13.980 + (( SPDK_TEST_FTL == 1 )) 00:01:13.980 + nvme_files["nvme-ftl.img"]=6G 00:01:13.980 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:13.980 + nvme_files["nvme-fdp.img"]=1G 00:01:13.980 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:13.980 + for nvme in "${!nvme_files[@]}" 00:01:13.980 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:13.980 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:13.980 + for nvme in "${!nvme_files[@]}" 00:01:13.980 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:01:13.980 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:13.980 + for nvme in "${!nvme_files[@]}" 00:01:13.980 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:14.239 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.239 + for nvme in "${!nvme_files[@]}" 00:01:14.239 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:14.497 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:14.497 + for nvme in "${!nvme_files[@]}" 00:01:14.497 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:14.497 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.497 + for nvme in "${!nvme_files[@]}" 00:01:14.497 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:14.497 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:14.497 + for nvme in "${!nvme_files[@]}" 00:01:14.497 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:14.756 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:14.756 + for nvme in "${!nvme_files[@]}" 00:01:14.756 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:01:14.756 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:14.756 + for nvme in "${!nvme_files[@]}" 00:01:14.756 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:15.014 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.273 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:15.273 + echo 'End stage prepare_nvme.sh' 00:01:15.273 End stage prepare_nvme.sh 00:01:15.285 [Pipeline] sh 00:01:15.568 + 
DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:15.568 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:15.568 00:01:15.568 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:15.568 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:15.568 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:15.568 HELP=0 00:01:15.568 DRY_RUN=0 00:01:15.568 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:01:15.568 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:15.568 NVME_AUTO_CREATE=0 00:01:15.568 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:01:15.568 NVME_CMB=,,,, 00:01:15.568 NVME_PMR=,,,, 00:01:15.568 NVME_ZNS=,,,, 00:01:15.568 NVME_MS=true,,,, 00:01:15.568 NVME_FDP=,,,on, 00:01:15.568 SPDK_VAGRANT_DISTRO=fedora38 00:01:15.568 SPDK_VAGRANT_VMCPU=10 00:01:15.568 SPDK_VAGRANT_VMRAM=12288 00:01:15.568 SPDK_VAGRANT_PROVIDER=libvirt 00:01:15.568 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:15.568 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:15.568 SPDK_OPENSTACK_NETWORK=0 00:01:15.568 VAGRANT_PACKAGE_BOX=0 00:01:15.568 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:15.568 FORCE_DISTRO=true 00:01:15.568 VAGRANT_BOX_VERSION= 00:01:15.568 EXTRA_VAGRANTFILES= 00:01:15.568 NIC_MODEL=e1000 00:01:15.568 00:01:15.568 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:01:15.568 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:18.101 Bringing machine 'default' up with 'libvirt' provider... 00:01:18.668 ==> default: Creating image (snapshot of base box volume). 00:01:18.927 ==> default: Creating domain with the following settings... 
00:01:18.927 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721969917_0d89cb82e752d58a07fa 00:01:18.927 ==> default: -- Domain type: kvm 00:01:18.927 ==> default: -- Cpus: 10 00:01:18.927 ==> default: -- Feature: acpi 00:01:18.927 ==> default: -- Feature: apic 00:01:18.927 ==> default: -- Feature: pae 00:01:18.927 ==> default: -- Memory: 12288M 00:01:18.927 ==> default: -- Memory Backing: hugepages: 00:01:18.927 ==> default: -- Management MAC: 00:01:18.927 ==> default: -- Loader: 00:01:18.927 ==> default: -- Nvram: 00:01:18.928 ==> default: -- Base box: spdk/fedora38 00:01:18.928 ==> default: -- Storage pool: default 00:01:18.928 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721969917_0d89cb82e752d58a07fa.img (20G) 00:01:18.928 ==> default: -- Volume Cache: default 00:01:18.928 ==> default: -- Kernel: 00:01:18.928 ==> default: -- Initrd: 00:01:18.928 ==> default: -- Graphics Type: vnc 00:01:18.928 ==> default: -- Graphics Port: -1 00:01:18.928 ==> default: -- Graphics IP: 127.0.0.1 00:01:18.928 ==> default: -- Graphics Password: Not defined 00:01:18.928 ==> default: -- Video Type: cirrus 00:01:18.928 ==> default: -- Video VRAM: 9216 00:01:18.928 ==> default: -- Sound Type: 00:01:18.928 ==> default: -- Keymap: en-us 00:01:18.928 ==> default: -- TPM Path: 00:01:18.928 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:18.928 ==> default: -- Command line args: 00:01:18.928 ==> default: -> value=-device, 00:01:18.928 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:18.928 ==> default: -> value=-drive, 00:01:18.928 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:18.928 ==> default: -> value=-device, 00:01:18.928 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:18.928 ==> default: -> value=-device, 00:01:18.928 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:18.928 ==> default: -> value=-drive, 00:01:18.928 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:01:18.928 ==> default: -> value=-device, 00:01:18.928 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.928 ==> default: -> value=-device, 00:01:18.928 ==> default: -> value=nvme,id=nvme-2,serial=12342, 00:01:18.928 ==> default: -> value=-drive, 00:01:18.928 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:18.928 ==> default: -> value=-device, 00:01:18.928 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.928 ==> default: -> value=-drive, 00:01:18.928 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:18.928 ==> default: -> value=-device, 00:01:18.928 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.928 ==> default: -> value=-drive, 00:01:18.928 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:18.928 ==> default: -> value=-device, 00:01:18.928 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.928 ==> default: -> value=-device, 00:01:18.928 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:18.928 ==> default: -> value=-device, 00:01:18.928 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3, 00:01:18.928 ==> default: -> value=-drive, 00:01:18.928 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:18.928 ==> default: -> value=-device, 00:01:18.928 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.928 ==> default: Creating shared folders metadata... 00:01:18.928 ==> default: Starting domain. 00:01:20.832 ==> default: Waiting for domain to get an IP address... 00:01:38.917 ==> default: Waiting for SSH to become available... 00:01:40.416 ==> default: Configuring and enabling network interfaces... 00:01:45.690 default: SSH address: 192.168.121.138:22 00:01:45.690 default: SSH username: vagrant 00:01:45.690 default: SSH auth method: private key 00:01:47.597 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:55.716 ==> default: Mounting SSHFS shared folder... 00:01:57.618 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:57.618 ==> default: Checking Mount.. 00:01:59.523 ==> default: Folder Successfully Mounted! 00:01:59.523 ==> default: Running provisioner: file... 00:02:00.091 default: ~/.gitconfig => .gitconfig 00:02:00.661 00:02:00.661 SUCCESS! 00:02:00.661 00:02:00.661 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:00.661 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:00.661 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:00.661 00:02:00.670 [Pipeline] } 00:02:00.688 [Pipeline] // stage 00:02:00.698 [Pipeline] dir 00:02:00.699 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:02:00.701 [Pipeline] { 00:02:00.715 [Pipeline] catchError 00:02:00.717 [Pipeline] { 00:02:00.731 [Pipeline] sh 00:02:01.010 + vagrant+ ssh-config --host vagrant 00:02:01.010 sed -ne /^Host/,$p 00:02:01.010 + tee ssh_conf 00:02:04.301 Host vagrant 00:02:04.301 HostName 192.168.121.138 00:02:04.301 User vagrant 00:02:04.301 Port 22 00:02:04.301 UserKnownHostsFile /dev/null 00:02:04.301 StrictHostKeyChecking no 00:02:04.301 PasswordAuthentication no 00:02:04.301 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:04.301 IdentitiesOnly yes 00:02:04.301 LogLevel FATAL 00:02:04.301 ForwardAgent yes 00:02:04.301 ForwardX11 yes 00:02:04.301 00:02:04.315 [Pipeline] withEnv 00:02:04.318 [Pipeline] { 00:02:04.333 [Pipeline] sh 00:02:04.622 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:04.622 source /etc/os-release 00:02:04.622 [[ -e /image.version ]] && img=$(< /image.version) 00:02:04.622 # Minimal, systemd-like check. 
00:02:04.622 if [[ -e /.dockerenv ]]; then 00:02:04.622 # Clear garbage from the node's name: 00:02:04.622 # agt-er_autotest_547-896 -> autotest_547-896 00:02:04.622 # $HOSTNAME is the actual container id 00:02:04.622 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:04.622 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:04.622 # We can assume this is a mount from a host where container is running, 00:02:04.622 # so fetch its hostname to easily identify the target swarm worker. 00:02:04.622 container="$(< /etc/hostname) ($agent)" 00:02:04.622 else 00:02:04.622 # Fallback 00:02:04.622 container=$agent 00:02:04.622 fi 00:02:04.622 fi 00:02:04.622 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:04.622 00:02:04.901 [Pipeline] } 00:02:04.918 [Pipeline] // withEnv 00:02:04.926 [Pipeline] setCustomBuildProperty 00:02:04.938 [Pipeline] stage 00:02:04.940 [Pipeline] { (Tests) 00:02:04.957 [Pipeline] sh 00:02:05.235 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:05.507 [Pipeline] sh 00:02:05.786 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:06.059 [Pipeline] timeout 00:02:06.059 Timeout set to expire in 40 min 00:02:06.061 [Pipeline] { 00:02:06.075 [Pipeline] sh 00:02:06.356 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:06.924 HEAD is now at dbef7efac test: fix dpdk builds on ubuntu24 00:02:06.936 [Pipeline] sh 00:02:07.217 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:07.490 [Pipeline] sh 00:02:07.772 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:08.049 [Pipeline] sh 00:02:08.332 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:08.592 ++ readlink -f spdk_repo 00:02:08.592 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:08.592 + [[ -n /home/vagrant/spdk_repo ]] 00:02:08.592 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:08.592 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:08.592 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:08.592 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:08.592 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:08.592 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:08.592 + cd /home/vagrant/spdk_repo 00:02:08.592 + source /etc/os-release 00:02:08.592 ++ NAME='Fedora Linux' 00:02:08.592 ++ VERSION='38 (Cloud Edition)' 00:02:08.592 ++ ID=fedora 00:02:08.592 ++ VERSION_ID=38 00:02:08.592 ++ VERSION_CODENAME= 00:02:08.592 ++ PLATFORM_ID=platform:f38 00:02:08.592 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:08.592 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:08.592 ++ LOGO=fedora-logo-icon 00:02:08.592 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:08.592 ++ HOME_URL=https://fedoraproject.org/ 00:02:08.592 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:08.592 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:08.592 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:08.592 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:08.592 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:08.592 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:08.592 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:08.592 ++ SUPPORT_END=2024-05-14 00:02:08.592 ++ VARIANT='Cloud Edition' 00:02:08.592 ++ VARIANT_ID=cloud 00:02:08.592 + uname -a 00:02:08.592 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:08.592 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:08.592 Hugepages 00:02:08.592 node hugesize free / total 00:02:08.592 node0 1048576kB 0 / 0 00:02:08.592 node0 2048kB 0 / 0 00:02:08.592 00:02:08.592 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:08.851 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:08.851 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:08.851 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:08.851 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:08.851 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:08.851 + rm -f /tmp/spdk-ld-path 00:02:08.851 + source autorun-spdk.conf 00:02:08.851 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.851 ++ SPDK_TEST_NVME=1 00:02:08.851 ++ SPDK_TEST_FTL=1 00:02:08.851 ++ SPDK_TEST_ISAL=1 00:02:08.851 ++ SPDK_RUN_ASAN=1 00:02:08.851 ++ SPDK_RUN_UBSAN=1 00:02:08.851 ++ SPDK_TEST_XNVME=1 00:02:08.851 ++ SPDK_TEST_NVME_FDP=1 00:02:08.851 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.851 ++ RUN_NIGHTLY=1 00:02:08.851 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:08.851 + [[ -n '' ]] 00:02:08.851 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:08.851 + for M in /var/spdk/build-*-manifest.txt 00:02:08.851 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:08.851 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.851 + for M in /var/spdk/build-*-manifest.txt 00:02:08.851 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:08.851 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.851 ++ uname 00:02:08.851 + [[ Linux == \L\i\n\u\x ]] 00:02:08.851 + sudo dmesg -T 00:02:08.851 + sudo dmesg --clear 00:02:08.851 + dmesg_pid=5161 00:02:08.851 + sudo dmesg -Tw 00:02:08.851 + [[ Fedora Linux == FreeBSD ]] 00:02:08.851 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.851 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.851 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:08.851 + [[ -x /usr/src/fio-static/fio ]] 00:02:08.851 + export 
FIO_BIN=/usr/src/fio-static/fio 00:02:08.851 + FIO_BIN=/usr/src/fio-static/fio 00:02:08.851 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:08.851 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:08.851 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:08.851 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.851 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.851 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:08.851 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.851 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.852 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:08.852 Test configuration: 00:02:08.852 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.852 SPDK_TEST_NVME=1 00:02:08.852 SPDK_TEST_FTL=1 00:02:08.852 SPDK_TEST_ISAL=1 00:02:08.852 SPDK_RUN_ASAN=1 00:02:08.852 SPDK_RUN_UBSAN=1 00:02:08.852 SPDK_TEST_XNVME=1 00:02:08.852 SPDK_TEST_NVME_FDP=1 00:02:08.852 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.111 RUN_NIGHTLY=1 04:59:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:09.111 04:59:27 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:09.111 04:59:27 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.111 04:59:27 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.111 04:59:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.111 04:59:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.111 04:59:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.111 04:59:27 -- paths/export.sh@5 -- $ export PATH 00:02:09.111 04:59:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.111 04:59:27 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:09.111 04:59:27 -- common/autobuild_common.sh@438 -- $ date +%s 00:02:09.111 04:59:27 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721969967.XXXXXX 00:02:09.111 04:59:28 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721969967.YT9OWi 
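The pair of steps just above is autobuild_common.sh carving out the per-run scratch area. A condensed sketch of the same pattern, grounded in the commands shown in the trace (GNU mktemp assumed): the epoch stamp embedded in the template makes workspaces sort chronologically, while mktemp's XXXXXX suffix keeps concurrent runs from colliding.

    # Per-run scratch workspace, as created by autobuild_common.sh above.
    stamp=$(date +%s)
    SPDK_WORKSPACE=$(mktemp -dt "spdk_${stamp}.XXXXXX")
    echo "$SPDK_WORKSPACE"    # e.g. /tmp/spdk_1721969967.YT9OWi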
00:02:09.111 04:59:28 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:02:09.111 04:59:28 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 00:02:09.111 04:59:28 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:09.111 04:59:28 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:09.111 04:59:28 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:09.111 04:59:28 -- common/autobuild_common.sh@454 -- $ get_config_params 00:02:09.111 04:59:28 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:09.111 04:59:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.111 04:59:28 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:09.111 04:59:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:09.111 04:59:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:09.111 04:59:28 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:09.111 04:59:28 -- spdk/autobuild.sh@16 -- $ date -u 00:02:09.111 Fri Jul 26 04:59:28 AM UTC 2024 00:02:09.111 04:59:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:09.111 LTS-60-gdbef7efac 00:02:09.111 04:59:28 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:09.112 04:59:28 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:09.112 04:59:28 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:09.112 04:59:28 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:09.112 04:59:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.112 ************************************ 00:02:09.112 START TEST asan 00:02:09.112 ************************************ 00:02:09.112 using asan 00:02:09.112 04:59:28 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:02:09.112 00:02:09.112 real 0m0.001s 00:02:09.112 user 0m0.000s 00:02:09.112 sys 0m0.000s 00:02:09.112 04:59:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:09.112 04:59:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.112 ************************************ 00:02:09.112 END TEST asan 00:02:09.112 ************************************ 00:02:09.112 04:59:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:09.112 04:59:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:09.112 04:59:28 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:09.112 04:59:28 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:09.112 04:59:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.112 ************************************ 00:02:09.112 START TEST ubsan 00:02:09.112 ************************************ 00:02:09.112 using ubsan 00:02:09.112 04:59:28 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:09.112 00:02:09.112 real 0m0.000s 00:02:09.112 user 0m0.000s 00:02:09.112 sys 0m0.000s 00:02:09.112 04:59:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:09.112 04:59:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.112 ************************************ 00:02:09.112 END TEST ubsan 00:02:09.112 ************************************ 00:02:09.112 04:59:28 -- 
spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:09.112 04:59:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:09.112 04:59:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:09.112 04:59:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:09.112 04:59:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:09.112 04:59:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:09.112 04:59:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:09.112 04:59:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:09.112 04:59:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:09.384 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:09.384 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:09.643 Using 'verbs' RDMA provider 00:02:25.465 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:40.340 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:40.340 Creating mk/config.mk...done. 00:02:40.340 Creating mk/cc.flags.mk...done. 00:02:40.340 Type 'make' to build. 00:02:40.340 04:59:58 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:40.340 04:59:58 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:40.340 04:59:58 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:40.340 04:59:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:40.340 ************************************ 00:02:40.340 START TEST make 00:02:40.340 ************************************ 00:02:40.340 04:59:58 -- common/autotest_common.sh@1104 -- $ make -j10 00:02:40.340 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:40.340 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:40.340 meson setup builddir \ 00:02:40.340 -Dwith-libaio=enabled \ 00:02:40.340 -Dwith-liburing=enabled \ 00:02:40.340 -Dwith-libvfn=disabled \ 00:02:40.340 -Dwith-spdk=false && \ 00:02:40.340 meson compile -C builddir && \ 00:02:40.340 cd -) 00:02:40.341 make[1]: Nothing to be done for 'all'. 
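The subshell above pins the xnvme feature set for this run: libaio and io_uring backends enabled, libvfn and SPDK backends disabled. The same build can be reproduced standalone; a minimal sketch assuming the spdk_repo checkout layout used by this job:

    # Standalone xnvme build with the exact options the test run passes above.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=false
    meson compile -C builddir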
00:02:42.244 The Meson build system 00:02:42.244 Version: 1.3.1 00:02:42.244 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:42.244 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:42.244 Build type: native build 00:02:42.244 Project name: xnvme 00:02:42.244 Project version: 0.7.3 00:02:42.244 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:42.244 C linker for the host machine: cc ld.bfd 2.39-16 00:02:42.244 Host machine cpu family: x86_64 00:02:42.244 Host machine cpu: x86_64 00:02:42.244 Message: host_machine.system: linux 00:02:42.244 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:42.244 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:42.244 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:42.244 Run-time dependency threads found: YES 00:02:42.244 Has header "setupapi.h" : NO 00:02:42.244 Has header "linux/blkzoned.h" : YES 00:02:42.244 Has header "linux/blkzoned.h" : YES (cached) 00:02:42.244 Has header "libaio.h" : YES 00:02:42.244 Library aio found: YES 00:02:42.244 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:42.244 Run-time dependency liburing found: YES 2.2 00:02:42.244 Dependency libvfn skipped: feature with-libvfn disabled 00:02:42.244 Run-time dependency appleframeworks found: NO (tried framework) 00:02:42.244 Run-time dependency appleframeworks found: NO (tried framework) 00:02:42.244 Configuring xnvme_config.h using configuration 00:02:42.244 Configuring xnvme.spec using configuration 00:02:42.244 Run-time dependency bash-completion found: YES 2.11 00:02:42.244 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:42.244 Program cp found: YES (/usr/bin/cp) 00:02:42.244 Has header "winsock2.h" : NO 00:02:42.244 Has header "dbghelp.h" : NO 00:02:42.244 Library rpcrt4 found: NO 00:02:42.244 Library rt found: YES 00:02:42.244 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:42.244 Found CMake: /usr/bin/cmake (3.27.7) 00:02:42.244 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:42.244 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:42.244 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:42.244 Build targets in project: 32 00:02:42.244 00:02:42.244 xnvme 0.7.3 00:02:42.244 00:02:42.244 User defined options 00:02:42.244 with-libaio : enabled 00:02:42.244 with-liburing: enabled 00:02:42.244 with-libvfn : disabled 00:02:42.244 with-spdk : false 00:02:42.244 00:02:42.244 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:42.503 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:42.503 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:42.503 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:42.503 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:42.503 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:42.503 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:42.503 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:42.503 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:42.761 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:42.761 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:42.761 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:42.761 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:42.761 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:42.761 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:42.761 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:42.761 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:42.761 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:42.761 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:42.761 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:42.761 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:42.761 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:42.761 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:42.761 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:43.019 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:43.019 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:43.019 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:43.019 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:43.019 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:43.019 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:43.019 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:43.019 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:43.019 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:43.019 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:43.019 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:43.019 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:43.019 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:43.019 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:43.019 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:43.019 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:43.019 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:43.019 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:43.019 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:43.019 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:43.019 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:43.019 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:43.019 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:43.019 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:43.019 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:43.019 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:43.019 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:43.019 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:43.019 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:43.019 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:43.019 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:43.289 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:43.289 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:43.289 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:43.289 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:43.289 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:43.289 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:43.289 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:43.289 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:43.289 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:43.289 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:43.289 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:43.289 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:43.289 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:43.289 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:43.289 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:43.289 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:43.289 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:43.289 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:43.556 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:43.556 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:43.556 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:43.556 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:43.556 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:43.556 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:43.556 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:43.556 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:43.556 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:43.556 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:43.556 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:43.556 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:43.556 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:43.556 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:43.556 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:43.556 [87/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:43.556 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:43.815 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:43.815 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:43.815 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:43.815 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:43.815 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:43.815 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:43.815 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:43.815 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:43.815 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:43.815 [98/203] Compiling 
C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:43.815 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:43.815 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:43.815 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:43.815 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:43.815 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:43.815 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:43.815 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:43.815 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:43.815 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:43.815 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:43.815 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:43.815 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:43.815 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:43.815 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:43.815 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:43.815 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:43.815 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:43.815 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:43.815 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:43.815 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:43.815 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:43.815 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:43.815 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:43.815 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:44.073 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:44.073 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:44.073 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:44.073 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:44.073 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:44.073 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:44.073 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:44.073 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:44.073 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:44.073 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:44.073 [133/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:44.073 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:44.073 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:44.073 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:44.073 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:44.073 [138/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:44.073 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:44.073 [140/203] Linking target lib/libxnvme.so 00:02:44.073 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:44.332 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:44.332 
[143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:44.332 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:44.332 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:44.332 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:44.332 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:44.332 [148/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:44.332 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:44.332 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:44.332 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:44.332 [152/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:44.332 [153/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:44.591 [154/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:44.591 [155/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:44.591 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:44.591 [157/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:44.591 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:44.591 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:44.591 [160/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:44.591 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:44.591 [162/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:44.591 [163/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:44.591 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:44.591 [165/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:44.591 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:44.591 [167/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:44.591 [168/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:44.849 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:44.849 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:44.850 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:44.850 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:44.850 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:44.850 [174/203] Linking static target lib/libxnvme.a 00:02:45.108 [175/203] Linking target tests/xnvme_tests_cli 00:02:45.108 [176/203] Linking target tests/xnvme_tests_enum 00:02:45.108 [177/203] Linking target tests/xnvme_tests_scc 00:02:45.108 [178/203] Linking target tests/xnvme_tests_buf 00:02:45.108 [179/203] Linking target tests/xnvme_tests_lblk 00:02:45.108 [180/203] Linking target tests/xnvme_tests_async_intf 00:02:45.108 [181/203] Linking target tests/xnvme_tests_znd_state 00:02:45.108 [182/203] Linking target tests/xnvme_tests_xnvme_file 00:02:45.108 [183/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:45.108 [184/203] Linking target tests/xnvme_tests_znd_append 00:02:45.108 [185/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:45.108 [186/203] Linking target tests/xnvme_tests_ioworker 00:02:45.108 [187/203] Linking target tests/xnvme_tests_kvs 00:02:45.108 [188/203] Linking target tests/xnvme_tests_map 00:02:45.108 [189/203] Linking target tools/lblk 00:02:45.108 [190/203] Linking target tools/xdd 00:02:45.108 [191/203] Linking target tools/xnvme 00:02:45.108 [192/203] 
Linking target examples/xnvme_enum 00:02:45.108 [193/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:45.108 [194/203] Linking target tools/xnvme_file 00:02:45.108 [195/203] Linking target tools/zoned 00:02:45.108 [196/203] Linking target examples/xnvme_dev 00:02:45.108 [197/203] Linking target examples/zoned_io_async 00:02:45.108 [198/203] Linking target tools/kvs 00:02:45.108 [199/203] Linking target examples/xnvme_hello 00:02:45.108 [200/203] Linking target examples/xnvme_single_async 00:02:45.108 [201/203] Linking target examples/xnvme_io_async 00:02:45.108 [202/203] Linking target examples/xnvme_single_sync 00:02:45.108 [203/203] Linking target examples/zoned_io_sync 00:02:45.108 INFO: autodetecting backend as ninja 00:02:45.108 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:45.366 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:51.927 The Meson build system 00:02:51.927 Version: 1.3.1 00:02:51.927 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:51.927 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:51.927 Build type: native build 00:02:51.927 Program cat found: YES (/usr/bin/cat) 00:02:51.927 Project name: DPDK 00:02:51.927 Project version: 23.11.0 00:02:51.927 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:51.927 C linker for the host machine: cc ld.bfd 2.39-16 00:02:51.927 Host machine cpu family: x86_64 00:02:51.927 Host machine cpu: x86_64 00:02:51.927 Message: ## Building in Developer Mode ## 00:02:51.927 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:51.927 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:51.927 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:51.927 Program python3 found: YES (/usr/bin/python3) 00:02:51.927 Program cat found: YES (/usr/bin/cat) 00:02:51.927 Compiler for C supports arguments -march=native: YES 00:02:51.927 Checking for size of "void *" : 8 00:02:51.927 Checking for size of "void *" : 8 (cached) 00:02:51.927 Library m found: YES 00:02:51.927 Library numa found: YES 00:02:51.927 Has header "numaif.h" : YES 00:02:51.927 Library fdt found: NO 00:02:51.927 Library execinfo found: NO 00:02:51.927 Has header "execinfo.h" : YES 00:02:51.927 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:51.927 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:51.927 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:51.927 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:51.927 Run-time dependency openssl found: YES 3.0.9 00:02:51.927 Run-time dependency libpcap found: YES 1.10.4 00:02:51.927 Has header "pcap.h" with dependency libpcap: YES 00:02:51.927 Compiler for C supports arguments -Wcast-qual: YES 00:02:51.927 Compiler for C supports arguments -Wdeprecated: YES 00:02:51.927 Compiler for C supports arguments -Wformat: YES 00:02:51.927 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:51.927 Compiler for C supports arguments -Wformat-security: NO 00:02:51.927 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:51.927 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:51.927 Compiler for C supports arguments -Wnested-externs: YES 00:02:51.927 Compiler for C supports arguments -Wold-style-definition: YES 00:02:51.927 Compiler for C supports arguments -Wpointer-arith: YES 00:02:51.927 Compiler for 
C supports arguments -Wsign-compare: YES 00:02:51.927 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:51.927 Compiler for C supports arguments -Wundef: YES 00:02:51.927 Compiler for C supports arguments -Wwrite-strings: YES 00:02:51.927 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:51.927 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:51.927 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:51.927 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:51.927 Program objdump found: YES (/usr/bin/objdump) 00:02:51.927 Compiler for C supports arguments -mavx512f: YES 00:02:51.927 Checking if "AVX512 checking" compiles: YES 00:02:51.927 Fetching value of define "__SSE4_2__" : 1 00:02:51.927 Fetching value of define "__AES__" : 1 00:02:51.927 Fetching value of define "__AVX__" : 1 00:02:51.927 Fetching value of define "__AVX2__" : 1 00:02:51.927 Fetching value of define "__AVX512BW__" : 1 00:02:51.927 Fetching value of define "__AVX512CD__" : 1 00:02:51.927 Fetching value of define "__AVX512DQ__" : 1 00:02:51.927 Fetching value of define "__AVX512F__" : 1 00:02:51.927 Fetching value of define "__AVX512VL__" : 1 00:02:51.927 Fetching value of define "__PCLMUL__" : 1 00:02:51.927 Fetching value of define "__RDRND__" : 1 00:02:51.927 Fetching value of define "__RDSEED__" : 1 00:02:51.928 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:51.928 Fetching value of define "__znver1__" : (undefined) 00:02:51.928 Fetching value of define "__znver2__" : (undefined) 00:02:51.928 Fetching value of define "__znver3__" : (undefined) 00:02:51.928 Fetching value of define "__znver4__" : (undefined) 00:02:51.928 Library asan found: YES 00:02:51.928 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:51.928 Message: lib/log: Defining dependency "log" 00:02:51.928 Message: lib/kvargs: Defining dependency "kvargs" 00:02:51.928 Message: lib/telemetry: Defining dependency "telemetry" 00:02:51.928 Library rt found: YES 00:02:51.928 Checking for function "getentropy" : NO 00:02:51.928 Message: lib/eal: Defining dependency "eal" 00:02:51.928 Message: lib/ring: Defining dependency "ring" 00:02:51.928 Message: lib/rcu: Defining dependency "rcu" 00:02:51.928 Message: lib/mempool: Defining dependency "mempool" 00:02:51.928 Message: lib/mbuf: Defining dependency "mbuf" 00:02:51.928 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:51.928 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:51.928 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:51.928 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:51.928 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:51.928 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:51.928 Compiler for C supports arguments -mpclmul: YES 00:02:51.928 Compiler for C supports arguments -maes: YES 00:02:51.928 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:51.928 Compiler for C supports arguments -mavx512bw: YES 00:02:51.928 Compiler for C supports arguments -mavx512dq: YES 00:02:51.928 Compiler for C supports arguments -mavx512vl: YES 00:02:51.928 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:51.928 Compiler for C supports arguments -mavx2: YES 00:02:51.928 Compiler for C supports arguments -mavx: YES 00:02:51.928 Message: lib/net: Defining dependency "net" 00:02:51.928 Message: lib/meter: Defining dependency "meter" 00:02:51.928 Message: lib/ethdev: Defining dependency 
"ethdev" 00:02:51.928 Message: lib/pci: Defining dependency "pci" 00:02:51.928 Message: lib/cmdline: Defining dependency "cmdline" 00:02:51.928 Message: lib/hash: Defining dependency "hash" 00:02:51.928 Message: lib/timer: Defining dependency "timer" 00:02:51.928 Message: lib/compressdev: Defining dependency "compressdev" 00:02:51.928 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:51.928 Message: lib/dmadev: Defining dependency "dmadev" 00:02:51.928 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:51.928 Message: lib/power: Defining dependency "power" 00:02:51.928 Message: lib/reorder: Defining dependency "reorder" 00:02:51.928 Message: lib/security: Defining dependency "security" 00:02:51.928 Has header "linux/userfaultfd.h" : YES 00:02:51.928 Has header "linux/vduse.h" : YES 00:02:51.928 Message: lib/vhost: Defining dependency "vhost" 00:02:51.928 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:51.928 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:51.928 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:51.928 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:51.928 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:51.928 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:51.928 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:51.928 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:51.928 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:51.928 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:51.928 Program doxygen found: YES (/usr/bin/doxygen) 00:02:51.928 Configuring doxy-api-html.conf using configuration 00:02:51.928 Configuring doxy-api-man.conf using configuration 00:02:51.928 Program mandb found: YES (/usr/bin/mandb) 00:02:51.928 Program sphinx-build found: NO 00:02:51.928 Configuring rte_build_config.h using configuration 00:02:51.928 Message: 00:02:51.928 ================= 00:02:51.928 Applications Enabled 00:02:51.928 ================= 00:02:51.928 00:02:51.928 apps: 00:02:51.928 00:02:51.928 00:02:51.928 Message: 00:02:51.928 ================= 00:02:51.928 Libraries Enabled 00:02:51.928 ================= 00:02:51.928 00:02:51.928 libs: 00:02:51.928 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:51.928 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:51.928 cryptodev, dmadev, power, reorder, security, vhost, 00:02:51.928 00:02:51.928 Message: 00:02:51.928 =============== 00:02:51.928 Drivers Enabled 00:02:51.928 =============== 00:02:51.928 00:02:51.928 common: 00:02:51.928 00:02:51.928 bus: 00:02:51.928 pci, vdev, 00:02:51.928 mempool: 00:02:51.928 ring, 00:02:51.928 dma: 00:02:51.928 00:02:51.928 net: 00:02:51.928 00:02:51.928 crypto: 00:02:51.928 00:02:51.928 compress: 00:02:51.928 00:02:51.928 vdpa: 00:02:51.928 00:02:51.928 00:02:51.928 Message: 00:02:51.928 ================= 00:02:51.928 Content Skipped 00:02:51.928 ================= 00:02:51.928 00:02:51.928 apps: 00:02:51.928 dumpcap: explicitly disabled via build config 00:02:51.928 graph: explicitly disabled via build config 00:02:51.928 pdump: explicitly disabled via build config 00:02:51.928 proc-info: explicitly disabled via build config 00:02:51.928 test-acl: explicitly disabled via build config 00:02:51.928 test-bbdev: explicitly disabled via build config 00:02:51.928 test-cmdline: explicitly disabled via 
build config 00:02:51.928 test-compress-perf: explicitly disabled via build config 00:02:51.928 test-crypto-perf: explicitly disabled via build config 00:02:51.928 test-dma-perf: explicitly disabled via build config 00:02:51.928 test-eventdev: explicitly disabled via build config 00:02:51.928 test-fib: explicitly disabled via build config 00:02:51.928 test-flow-perf: explicitly disabled via build config 00:02:51.928 test-gpudev: explicitly disabled via build config 00:02:51.928 test-mldev: explicitly disabled via build config 00:02:51.928 test-pipeline: explicitly disabled via build config 00:02:51.928 test-pmd: explicitly disabled via build config 00:02:51.928 test-regex: explicitly disabled via build config 00:02:51.928 test-sad: explicitly disabled via build config 00:02:51.928 test-security-perf: explicitly disabled via build config 00:02:51.928 00:02:51.928 libs: 00:02:51.928 metrics: explicitly disabled via build config 00:02:51.928 acl: explicitly disabled via build config 00:02:51.928 bbdev: explicitly disabled via build config 00:02:51.928 bitratestats: explicitly disabled via build config 00:02:51.928 bpf: explicitly disabled via build config 00:02:51.928 cfgfile: explicitly disabled via build config 00:02:51.928 distributor: explicitly disabled via build config 00:02:51.928 efd: explicitly disabled via build config 00:02:51.928 eventdev: explicitly disabled via build config 00:02:51.928 dispatcher: explicitly disabled via build config 00:02:51.928 gpudev: explicitly disabled via build config 00:02:51.928 gro: explicitly disabled via build config 00:02:51.928 gso: explicitly disabled via build config 00:02:51.928 ip_frag: explicitly disabled via build config 00:02:51.928 jobstats: explicitly disabled via build config 00:02:51.928 latencystats: explicitly disabled via build config 00:02:51.928 lpm: explicitly disabled via build config 00:02:51.928 member: explicitly disabled via build config 00:02:51.928 pcapng: explicitly disabled via build config 00:02:51.928 rawdev: explicitly disabled via build config 00:02:51.928 regexdev: explicitly disabled via build config 00:02:51.928 mldev: explicitly disabled via build config 00:02:51.928 rib: explicitly disabled via build config 00:02:51.928 sched: explicitly disabled via build config 00:02:51.928 stack: explicitly disabled via build config 00:02:51.928 ipsec: explicitly disabled via build config 00:02:51.928 pdcp: explicitly disabled via build config 00:02:51.928 fib: explicitly disabled via build config 00:02:51.928 port: explicitly disabled via build config 00:02:51.928 pdump: explicitly disabled via build config 00:02:51.928 table: explicitly disabled via build config 00:02:51.928 pipeline: explicitly disabled via build config 00:02:51.928 graph: explicitly disabled via build config 00:02:51.928 node: explicitly disabled via build config 00:02:51.928 00:02:51.928 drivers: 00:02:51.928 common/cpt: not in enabled drivers build config 00:02:51.928 common/dpaax: not in enabled drivers build config 00:02:51.928 common/iavf: not in enabled drivers build config 00:02:51.928 common/idpf: not in enabled drivers build config 00:02:51.928 common/mvep: not in enabled drivers build config 00:02:51.928 common/octeontx: not in enabled drivers build config 00:02:51.928 bus/auxiliary: not in enabled drivers build config 00:02:51.928 bus/cdx: not in enabled drivers build config 00:02:51.928 bus/dpaa: not in enabled drivers build config 00:02:51.928 bus/fslmc: not in enabled drivers build config 00:02:51.928 bus/ifpga: not in enabled drivers build 
config 00:02:51.928 bus/platform: not in enabled drivers build config 00:02:51.928 bus/vmbus: not in enabled drivers build config 00:02:51.928 common/cnxk: not in enabled drivers build config 00:02:51.928 common/mlx5: not in enabled drivers build config 00:02:51.928 common/nfp: not in enabled drivers build config 00:02:51.928 common/qat: not in enabled drivers build config 00:02:51.928 common/sfc_efx: not in enabled drivers build config 00:02:51.928 mempool/bucket: not in enabled drivers build config 00:02:51.928 mempool/cnxk: not in enabled drivers build config 00:02:51.928 mempool/dpaa: not in enabled drivers build config 00:02:51.928 mempool/dpaa2: not in enabled drivers build config 00:02:51.929 mempool/octeontx: not in enabled drivers build config 00:02:51.929 mempool/stack: not in enabled drivers build config 00:02:51.929 dma/cnxk: not in enabled drivers build config 00:02:51.929 dma/dpaa: not in enabled drivers build config 00:02:51.929 dma/dpaa2: not in enabled drivers build config 00:02:51.929 dma/hisilicon: not in enabled drivers build config 00:02:51.929 dma/idxd: not in enabled drivers build config 00:02:51.929 dma/ioat: not in enabled drivers build config 00:02:51.929 dma/skeleton: not in enabled drivers build config 00:02:51.929 net/af_packet: not in enabled drivers build config 00:02:51.929 net/af_xdp: not in enabled drivers build config 00:02:51.929 net/ark: not in enabled drivers build config 00:02:51.929 net/atlantic: not in enabled drivers build config 00:02:51.929 net/avp: not in enabled drivers build config 00:02:51.929 net/axgbe: not in enabled drivers build config 00:02:51.929 net/bnx2x: not in enabled drivers build config 00:02:51.929 net/bnxt: not in enabled drivers build config 00:02:51.929 net/bonding: not in enabled drivers build config 00:02:51.929 net/cnxk: not in enabled drivers build config 00:02:51.929 net/cpfl: not in enabled drivers build config 00:02:51.929 net/cxgbe: not in enabled drivers build config 00:02:51.929 net/dpaa: not in enabled drivers build config 00:02:51.929 net/dpaa2: not in enabled drivers build config 00:02:51.929 net/e1000: not in enabled drivers build config 00:02:51.929 net/ena: not in enabled drivers build config 00:02:51.929 net/enetc: not in enabled drivers build config 00:02:51.929 net/enetfec: not in enabled drivers build config 00:02:51.929 net/enic: not in enabled drivers build config 00:02:51.929 net/failsafe: not in enabled drivers build config 00:02:51.929 net/fm10k: not in enabled drivers build config 00:02:51.929 net/gve: not in enabled drivers build config 00:02:51.929 net/hinic: not in enabled drivers build config 00:02:51.929 net/hns3: not in enabled drivers build config 00:02:51.929 net/i40e: not in enabled drivers build config 00:02:51.929 net/iavf: not in enabled drivers build config 00:02:51.929 net/ice: not in enabled drivers build config 00:02:51.929 net/idpf: not in enabled drivers build config 00:02:51.929 net/igc: not in enabled drivers build config 00:02:51.929 net/ionic: not in enabled drivers build config 00:02:51.929 net/ipn3ke: not in enabled drivers build config 00:02:51.929 net/ixgbe: not in enabled drivers build config 00:02:51.929 net/mana: not in enabled drivers build config 00:02:51.929 net/memif: not in enabled drivers build config 00:02:51.929 net/mlx4: not in enabled drivers build config 00:02:51.929 net/mlx5: not in enabled drivers build config 00:02:51.929 net/mvneta: not in enabled drivers build config 00:02:51.929 net/mvpp2: not in enabled drivers build config 00:02:51.929 net/netvsc: not in 
enabled drivers build config 00:02:51.929 net/nfb: not in enabled drivers build config 00:02:51.929 net/nfp: not in enabled drivers build config 00:02:51.929 net/ngbe: not in enabled drivers build config 00:02:51.929 net/null: not in enabled drivers build config 00:02:51.929 net/octeontx: not in enabled drivers build config 00:02:51.929 net/octeon_ep: not in enabled drivers build config 00:02:51.929 net/pcap: not in enabled drivers build config 00:02:51.929 net/pfe: not in enabled drivers build config 00:02:51.929 net/qede: not in enabled drivers build config 00:02:51.929 net/ring: not in enabled drivers build config 00:02:51.929 net/sfc: not in enabled drivers build config 00:02:51.929 net/softnic: not in enabled drivers build config 00:02:51.929 net/tap: not in enabled drivers build config 00:02:51.929 net/thunderx: not in enabled drivers build config 00:02:51.929 net/txgbe: not in enabled drivers build config 00:02:51.929 net/vdev_netvsc: not in enabled drivers build config 00:02:51.929 net/vhost: not in enabled drivers build config 00:02:51.929 net/virtio: not in enabled drivers build config 00:02:51.929 net/vmxnet3: not in enabled drivers build config 00:02:51.929 raw/*: missing internal dependency, "rawdev" 00:02:51.929 crypto/armv8: not in enabled drivers build config 00:02:51.929 crypto/bcmfs: not in enabled drivers build config 00:02:51.929 crypto/caam_jr: not in enabled drivers build config 00:02:51.929 crypto/ccp: not in enabled drivers build config 00:02:51.929 crypto/cnxk: not in enabled drivers build config 00:02:51.929 crypto/dpaa_sec: not in enabled drivers build config 00:02:51.929 crypto/dpaa2_sec: not in enabled drivers build config 00:02:51.929 crypto/ipsec_mb: not in enabled drivers build config 00:02:51.929 crypto/mlx5: not in enabled drivers build config 00:02:51.929 crypto/mvsam: not in enabled drivers build config 00:02:51.929 crypto/nitrox: not in enabled drivers build config 00:02:51.929 crypto/null: not in enabled drivers build config 00:02:51.929 crypto/octeontx: not in enabled drivers build config 00:02:51.929 crypto/openssl: not in enabled drivers build config 00:02:51.929 crypto/scheduler: not in enabled drivers build config 00:02:51.929 crypto/uadk: not in enabled drivers build config 00:02:51.929 crypto/virtio: not in enabled drivers build config 00:02:51.929 compress/isal: not in enabled drivers build config 00:02:51.929 compress/mlx5: not in enabled drivers build config 00:02:51.929 compress/octeontx: not in enabled drivers build config 00:02:51.929 compress/zlib: not in enabled drivers build config 00:02:51.929 regex/*: missing internal dependency, "regexdev" 00:02:51.929 ml/*: missing internal dependency, "mldev" 00:02:51.929 vdpa/ifc: not in enabled drivers build config 00:02:51.929 vdpa/mlx5: not in enabled drivers build config 00:02:51.929 vdpa/nfp: not in enabled drivers build config 00:02:51.929 vdpa/sfc: not in enabled drivers build config 00:02:51.929 event/*: missing internal dependency, "eventdev" 00:02:51.929 baseband/*: missing internal dependency, "bbdev" 00:02:51.929 gpu/*: missing internal dependency, "gpudev" 00:02:51.929 00:02:51.929 00:02:51.929 Build targets in project: 85 00:02:51.929 00:02:51.929 DPDK 23.11.0 00:02:51.929 00:02:51.929 User defined options 00:02:51.929 buildtype : debug 00:02:51.929 default_library : shared 00:02:51.929 libdir : lib 00:02:51.929 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:51.929 b_sanitize : address 00:02:51.929 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread 
-Wno-array-bounds 00:02:51.929 c_link_args : 00:02:51.929 cpu_instruction_set: native 00:02:51.929 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:51.929 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:51.929 enable_docs : false 00:02:51.929 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:51.929 enable_kmods : false 00:02:51.929 tests : false 00:02:51.929 00:02:51.929 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:51.929 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:51.929 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:51.929 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:51.929 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:51.929 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:51.929 [5/265] Linking static target lib/librte_kvargs.a 00:02:51.929 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:51.929 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:51.929 [8/265] Linking static target lib/librte_log.a 00:02:51.929 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:51.929 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:52.187 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.445 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:52.445 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:52.445 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:52.445 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:52.445 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:52.445 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:52.445 [18/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:52.445 [19/265] Linking static target lib/librte_telemetry.a 00:02:52.703 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:52.962 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:52.962 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:52.962 [23/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.222 [24/265] Linking target lib/librte_log.so.24.0 00:02:53.222 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:53.222 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:53.222 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:53.222 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:53.222 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:53.222 [30/265] Generating symbol 
file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:53.481 [31/265] Linking target lib/librte_kvargs.so.24.0 00:02:53.481 [32/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.481 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:53.481 [34/265] Linking target lib/librte_telemetry.so.24.0 00:02:53.481 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:53.745 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:53.745 [37/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:53.745 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:53.745 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:53.745 [40/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:53.745 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:53.745 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:54.002 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:54.002 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:54.002 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:54.002 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:54.260 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:54.260 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:54.260 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:54.517 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:54.517 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:54.517 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:54.517 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:54.517 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:54.517 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:54.517 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:54.776 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:54.776 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:54.776 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:55.036 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:55.036 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:55.036 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:55.036 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:55.036 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:55.036 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:55.036 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:55.297 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:55.297 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:55.575 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 
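Each "Fetching value of define" probe in the configure output above is Meson asking the compiler which of its predefined CPU-feature macros are set; with -march=native in play, that is how DPDK decides which SIMD code paths to build. A minimal sketch of the same probe in plain C (illustrative only, not part of this build):

#include <stdio.h>

/* Prints two of the compiler-predefined macros queried above; compiled
 * with -march=native they reflect the host CPU, matching the YES /
 * (undefined) answers in the configure log. */
int main(void)
{
#ifdef __AVX512F__
	printf("__AVX512F__    : %d\n", __AVX512F__);
#else
	printf("__AVX512F__    : (undefined)\n");
#endif
#ifdef __VPCLMULQDQ__
	printf("__VPCLMULQDQ__ : %d\n", __VPCLMULQDQ__);
#else
	printf("__VPCLMULQDQ__ : (undefined)\n");
#endif
	return 0;
}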
00:02:55.575 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:55.575 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:55.575 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:55.575 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:55.840 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:55.840 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:55.840 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:55.840 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:55.840 [78/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:55.840 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:56.099 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:56.099 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:56.099 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:56.358 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:56.358 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:56.617 [85/265] Linking static target lib/librte_eal.a 00:02:56.617 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:56.617 [87/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:56.617 [88/265] Linking static target lib/librte_ring.a 00:02:56.617 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:56.617 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:56.876 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:56.876 [92/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:56.876 [93/265] Linking static target lib/librte_mempool.a 00:02:56.876 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:57.135 [95/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.135 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:57.135 [97/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:57.135 [98/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:57.135 [99/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:57.135 [100/265] Linking static target lib/librte_rcu.a 00:02:57.393 [101/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:57.656 [102/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:57.656 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:57.656 [104/265] Linking static target lib/librte_meter.a 00:02:57.656 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:57.656 [106/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:57.656 [107/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:57.656 [108/265] Linking static target lib/librte_net.a 00:02:57.913 [109/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.913 [110/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:57.913 [111/265] Linking static target lib/librte_mbuf.a 00:02:58.172 [112/265] Generating lib/meter.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:58.172 [113/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.172 [114/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.431 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:58.431 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:58.431 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:58.431 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:58.690 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:58.949 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:58.949 [121/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.207 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:59.207 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:59.207 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:59.207 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:59.207 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:59.207 [127/265] Linking static target lib/librte_pci.a 00:02:59.207 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:59.207 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:59.466 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:59.466 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:59.466 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:59.725 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:59.725 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:59.725 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:59.725 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:59.725 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:59.725 [138/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.725 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:59.725 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:59.725 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:59.725 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:59.985 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:59.985 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:59.985 [145/265] Linking static target lib/librte_cmdline.a 00:02:59.985 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:00.244 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:00.244 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:00.244 [149/265] Linking static target lib/librte_timer.a 00:03:00.502 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:00.502 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 
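The librte_eal.a linked above at [85/265] is DPDK's Environment Abstraction Layer, the bootstrap that every other librte_* object in this stream initializes through. A minimal consumer looks roughly like this (a sketch, not part of this build; real applications pass EAL options such as the core mask ahead of their own argv):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
	/* rte_eal_init() parses and consumes the leading EAL arguments
	 * (cores, memory, ...) and returns how many it used. */
	int ret = rte_eal_init(argc, argv);
	if (ret < 0) {
		fprintf(stderr, "rte_eal_init() failed\n");
		return 1;
	}
	printf("EAL ready, main lcore = %u\n", rte_lcore_id());
	rte_eal_cleanup();	/* release hugepages and other EAL state */
	return 0;
}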
00:03:00.502 [152/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:00.760 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:00.760 [154/265] Linking static target lib/librte_compressdev.a 00:03:00.760 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:00.760 [156/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:00.760 [157/265] Linking static target lib/librte_ethdev.a 00:03:00.760 [158/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.019 [159/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:01.019 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:01.019 [161/265] Linking static target lib/librte_hash.a 00:03:01.019 [162/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:01.019 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:01.019 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:01.278 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:01.278 [166/265] Linking static target lib/librte_dmadev.a 00:03:01.278 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:01.278 [168/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:01.536 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:01.536 [170/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.536 [171/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:01.536 [172/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.795 [173/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:02.054 [174/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:02.054 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:02.054 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.054 [177/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:02.054 [178/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.054 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:02.054 [180/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:02.054 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:02.054 [182/265] Linking static target lib/librte_cryptodev.a 00:03:02.314 [183/265] Linking static target lib/librte_power.a 00:03:02.314 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:02.573 [185/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:02.573 [186/265] Linking static target lib/librte_reorder.a 00:03:02.573 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:02.832 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:03.091 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:03.091 [190/265] Linking static target lib/librte_security.a 00:03:03.091 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:03.091 [192/265] Generating lib/reorder.sym_chk with a custom 
command (wrapped by meson to capture output) 00:03:03.351 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.610 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:03.610 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:03.610 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:03.610 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:03.870 [198/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.870 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.130 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.130 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.130 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.130 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.397 [204/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.397 [205/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.397 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:04.397 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:04.397 [208/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.657 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:04.657 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:04.657 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.657 [212/265] Linking static target drivers/librte_bus_vdev.a 00:03:04.657 [213/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.657 [214/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.657 [215/265] Linking static target drivers/librte_bus_pci.a 00:03:04.657 [216/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.657 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:04.657 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:04.916 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.916 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:04.916 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.916 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.916 [223/265] Linking static target drivers/librte_mempool_ring.a 00:03:05.176 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.552 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:08.456 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.456 [227/265] Linking target lib/librte_eal.so.24.0 00:03:08.456 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:08.456 [229/265] Linking target 
lib/librte_ring.so.24.0 00:03:08.457 [230/265] Linking target lib/librte_meter.so.24.0 00:03:08.457 [231/265] Linking target lib/librte_timer.so.24.0 00:03:08.457 [232/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:08.457 [233/265] Linking target lib/librte_dmadev.so.24.0 00:03:08.457 [234/265] Linking target lib/librte_pci.so.24.0 00:03:08.457 [235/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.716 [236/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:08.716 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:08.716 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:08.716 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:08.716 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:08.716 [241/265] Linking target lib/librte_rcu.so.24.0 00:03:08.716 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:08.716 [243/265] Linking target lib/librte_mempool.so.24.0 00:03:08.716 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:08.716 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:08.974 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:08.974 [247/265] Linking target lib/librte_mbuf.so.24.0 00:03:08.974 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:09.233 [249/265] Linking target lib/librte_reorder.so.24.0 00:03:09.233 [250/265] Linking target lib/librte_net.so.24.0 00:03:09.233 [251/265] Linking target lib/librte_compressdev.so.24.0 00:03:09.233 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:03:09.233 [253/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:09.233 [254/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:09.493 [255/265] Linking target lib/librte_security.so.24.0 00:03:09.493 [256/265] Linking target lib/librte_hash.so.24.0 00:03:09.493 [257/265] Linking target lib/librte_cmdline.so.24.0 00:03:09.493 [258/265] Linking target lib/librte_ethdev.so.24.0 00:03:09.493 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:09.493 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:09.754 [261/265] Linking target lib/librte_power.so.24.0 00:03:10.321 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:10.321 [263/265] Linking static target lib/librte_vhost.a 00:03:12.226 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.484 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:12.484 INFO: autodetecting backend as ninja 00:03:12.485 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:13.419 CC lib/ut/ut.o 00:03:13.419 CC lib/ut_mock/mock.o 00:03:13.419 CC lib/log/log.o 00:03:13.419 CC lib/log/log_flags.o 00:03:13.419 CC lib/log/log_deprecated.o 00:03:13.678 LIB libspdk_ut_mock.a 00:03:13.678 LIB libspdk_ut.a 00:03:13.678 LIB libspdk_log.a 00:03:13.678 SO libspdk_ut_mock.so.5.0 00:03:13.678 SO libspdk_ut.so.1.0 00:03:13.678 SO libspdk_log.so.6.1 00:03:13.678 SYMLINK libspdk_ut_mock.so 00:03:13.678 SYMLINK libspdk_ut.so 00:03:13.936 
SYMLINK libspdk_log.so 00:03:13.936 CC lib/dma/dma.o 00:03:13.936 CC lib/util/base64.o 00:03:13.936 CC lib/util/bit_array.o 00:03:13.936 CC lib/util/cpuset.o 00:03:13.936 CC lib/util/crc32.o 00:03:13.937 CC lib/util/crc16.o 00:03:13.937 CC lib/util/crc32c.o 00:03:13.937 CXX lib/trace_parser/trace.o 00:03:13.937 CC lib/ioat/ioat.o 00:03:14.195 CC lib/vfio_user/host/vfio_user_pci.o 00:03:14.195 LIB libspdk_dma.a 00:03:14.195 CC lib/vfio_user/host/vfio_user.o 00:03:14.195 CC lib/util/crc32_ieee.o 00:03:14.195 CC lib/util/crc64.o 00:03:14.195 SO libspdk_dma.so.3.0 00:03:14.195 SYMLINK libspdk_dma.so 00:03:14.195 CC lib/util/dif.o 00:03:14.195 CC lib/util/fd.o 00:03:14.195 CC lib/util/file.o 00:03:14.195 CC lib/util/hexlify.o 00:03:14.453 CC lib/util/iov.o 00:03:14.453 CC lib/util/math.o 00:03:14.453 LIB libspdk_ioat.a 00:03:14.453 CC lib/util/pipe.o 00:03:14.453 CC lib/util/strerror_tls.o 00:03:14.453 SO libspdk_ioat.so.6.0 00:03:14.453 LIB libspdk_vfio_user.a 00:03:14.453 CC lib/util/string.o 00:03:14.453 CC lib/util/uuid.o 00:03:14.453 SYMLINK libspdk_ioat.so 00:03:14.453 SO libspdk_vfio_user.so.4.0 00:03:14.453 CC lib/util/fd_group.o 00:03:14.453 CC lib/util/xor.o 00:03:14.453 CC lib/util/zipf.o 00:03:14.453 SYMLINK libspdk_vfio_user.so 00:03:15.017 LIB libspdk_util.a 00:03:15.017 SO libspdk_util.so.8.0 00:03:15.017 LIB libspdk_trace_parser.a 00:03:15.017 SO libspdk_trace_parser.so.4.0 00:03:15.275 SYMLINK libspdk_util.so 00:03:15.275 SYMLINK libspdk_trace_parser.so 00:03:15.275 CC lib/conf/conf.o 00:03:15.275 CC lib/rdma/common.o 00:03:15.275 CC lib/idxd/idxd.o 00:03:15.275 CC lib/vmd/vmd.o 00:03:15.275 CC lib/idxd/idxd_user.o 00:03:15.275 CC lib/rdma/rdma_verbs.o 00:03:15.275 CC lib/env_dpdk/env.o 00:03:15.275 CC lib/idxd/idxd_kernel.o 00:03:15.275 CC lib/json/json_parse.o 00:03:15.275 CC lib/json/json_util.o 00:03:15.533 CC lib/vmd/led.o 00:03:15.533 CC lib/env_dpdk/memory.o 00:03:15.533 LIB libspdk_conf.a 00:03:15.533 CC lib/json/json_write.o 00:03:15.534 CC lib/env_dpdk/pci.o 00:03:15.534 SO libspdk_conf.so.5.0 00:03:15.534 LIB libspdk_rdma.a 00:03:15.534 SYMLINK libspdk_conf.so 00:03:15.534 CC lib/env_dpdk/init.o 00:03:15.534 CC lib/env_dpdk/threads.o 00:03:15.534 CC lib/env_dpdk/pci_ioat.o 00:03:15.534 SO libspdk_rdma.so.5.0 00:03:15.792 SYMLINK libspdk_rdma.so 00:03:15.792 CC lib/env_dpdk/pci_virtio.o 00:03:15.792 CC lib/env_dpdk/pci_vmd.o 00:03:15.792 CC lib/env_dpdk/pci_idxd.o 00:03:15.792 LIB libspdk_json.a 00:03:15.792 CC lib/env_dpdk/pci_event.o 00:03:15.792 CC lib/env_dpdk/sigbus_handler.o 00:03:15.792 SO libspdk_json.so.5.1 00:03:15.792 CC lib/env_dpdk/pci_dpdk.o 00:03:15.792 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:16.051 LIB libspdk_idxd.a 00:03:16.051 SYMLINK libspdk_json.so 00:03:16.051 SO libspdk_idxd.so.11.0 00:03:16.051 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:16.051 LIB libspdk_vmd.a 00:03:16.051 SYMLINK libspdk_idxd.so 00:03:16.051 SO libspdk_vmd.so.5.0 00:03:16.051 CC lib/jsonrpc/jsonrpc_server.o 00:03:16.051 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:16.051 CC lib/jsonrpc/jsonrpc_client.o 00:03:16.051 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:16.051 SYMLINK libspdk_vmd.so 00:03:16.309 LIB libspdk_jsonrpc.a 00:03:16.567 SO libspdk_jsonrpc.so.5.1 00:03:16.567 SYMLINK libspdk_jsonrpc.so 00:03:16.825 CC lib/rpc/rpc.o 00:03:17.083 LIB libspdk_rpc.a 00:03:17.083 LIB libspdk_env_dpdk.a 00:03:17.083 SO libspdk_rpc.so.5.0 00:03:17.083 SYMLINK libspdk_rpc.so 00:03:17.083 SO libspdk_env_dpdk.so.13.0 00:03:17.342 CC lib/trace/trace.o 00:03:17.342 CC lib/trace/trace_flags.o 
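The lib/jsonrpc and lib/rpc objects above form SPDK's management plane: a JSON-RPC 2.0 server that a running SPDK target exposes, conventionally on the Unix socket /var/tmp/spdk.sock (that path, and the rpc_get_methods method used below, are assumptions about a default setup). scripts/rpc.py is the usual client; a bare-bones equivalent in plain POSIX C:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
	/* One JSON-RPC 2.0 request, as rpc.py would send it. */
	const char *req =
	    "{\"jsonrpc\":\"2.0\",\"method\":\"rpc_get_methods\",\"id\":1}";
	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	char buf[4096];
	ssize_t n;
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);

	if (fd < 0)
		return 1;
	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;
	if (write(fd, req, strlen(req)) < 0)
		return 1;
	n = read(fd, buf, sizeof(buf) - 1);	/* first chunk of the reply */
	if (n > 0) {
		buf[n] = '\0';
		printf("%s\n", buf);
	}
	close(fd);
	return 0;
}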
00:03:17.342 CC lib/trace/trace_rpc.o 00:03:17.342 CC lib/notify/notify_rpc.o 00:03:17.342 CC lib/notify/notify.o 00:03:17.342 CC lib/sock/sock.o 00:03:17.342 CC lib/sock/sock_rpc.o 00:03:17.342 SYMLINK libspdk_env_dpdk.so 00:03:17.342 LIB libspdk_notify.a 00:03:17.342 SO libspdk_notify.so.5.0 00:03:17.342 LIB libspdk_trace.a 00:03:17.600 SYMLINK libspdk_notify.so 00:03:17.600 SO libspdk_trace.so.9.0 00:03:17.600 SYMLINK libspdk_trace.so 00:03:17.600 LIB libspdk_sock.a 00:03:17.600 SO libspdk_sock.so.8.0 00:03:17.858 SYMLINK libspdk_sock.so 00:03:17.858 CC lib/thread/thread.o 00:03:17.858 CC lib/thread/iobuf.o 00:03:18.116 CC lib/nvme/nvme_ctrlr.o 00:03:18.116 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:18.116 CC lib/nvme/nvme_fabric.o 00:03:18.116 CC lib/nvme/nvme_ns_cmd.o 00:03:18.116 CC lib/nvme/nvme_qpair.o 00:03:18.116 CC lib/nvme/nvme_pcie_common.o 00:03:18.116 CC lib/nvme/nvme_ns.o 00:03:18.116 CC lib/nvme/nvme_pcie.o 00:03:18.116 CC lib/nvme/nvme.o 00:03:18.684 CC lib/nvme/nvme_quirks.o 00:03:18.684 CC lib/nvme/nvme_transport.o 00:03:18.943 CC lib/nvme/nvme_discovery.o 00:03:18.943 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:18.943 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:18.943 CC lib/nvme/nvme_tcp.o 00:03:19.201 CC lib/nvme/nvme_opal.o 00:03:19.201 CC lib/nvme/nvme_io_msg.o 00:03:19.201 CC lib/nvme/nvme_poll_group.o 00:03:19.460 CC lib/nvme/nvme_zns.o 00:03:19.460 CC lib/nvme/nvme_cuse.o 00:03:19.460 CC lib/nvme/nvme_vfio_user.o 00:03:19.460 CC lib/nvme/nvme_rdma.o 00:03:19.719 LIB libspdk_thread.a 00:03:19.719 SO libspdk_thread.so.9.0 00:03:19.719 SYMLINK libspdk_thread.so 00:03:19.978 CC lib/accel/accel.o 00:03:19.978 CC lib/blob/blobstore.o 00:03:19.978 CC lib/init/json_config.o 00:03:19.978 CC lib/blob/request.o 00:03:19.978 CC lib/virtio/virtio.o 00:03:19.978 CC lib/blob/zeroes.o 00:03:20.237 CC lib/init/subsystem.o 00:03:20.237 CC lib/init/subsystem_rpc.o 00:03:20.237 CC lib/init/rpc.o 00:03:20.237 CC lib/accel/accel_rpc.o 00:03:20.496 CC lib/virtio/virtio_vhost_user.o 00:03:20.496 CC lib/virtio/virtio_vfio_user.o 00:03:20.496 CC lib/virtio/virtio_pci.o 00:03:20.496 LIB libspdk_init.a 00:03:20.496 SO libspdk_init.so.4.0 00:03:20.496 CC lib/blob/blob_bs_dev.o 00:03:20.496 CC lib/accel/accel_sw.o 00:03:20.496 SYMLINK libspdk_init.so 00:03:20.767 CC lib/event/app.o 00:03:20.767 CC lib/event/log_rpc.o 00:03:20.767 CC lib/event/reactor.o 00:03:20.767 CC lib/event/app_rpc.o 00:03:20.767 LIB libspdk_virtio.a 00:03:20.767 CC lib/event/scheduler_static.o 00:03:20.767 SO libspdk_virtio.so.6.0 00:03:20.767 SYMLINK libspdk_virtio.so 00:03:21.037 LIB libspdk_nvme.a 00:03:21.037 LIB libspdk_accel.a 00:03:21.296 LIB libspdk_event.a 00:03:21.296 SO libspdk_nvme.so.12.0 00:03:21.296 SO libspdk_accel.so.14.0 00:03:21.296 SO libspdk_event.so.12.0 00:03:21.296 SYMLINK libspdk_accel.so 00:03:21.296 SYMLINK libspdk_event.so 00:03:21.555 CC lib/bdev/bdev.o 00:03:21.555 CC lib/bdev/bdev_rpc.o 00:03:21.555 CC lib/bdev/bdev_zone.o 00:03:21.555 CC lib/bdev/part.o 00:03:21.555 CC lib/bdev/scsi_nvme.o 00:03:21.555 SYMLINK libspdk_nvme.so 00:03:23.457 LIB libspdk_blob.a 00:03:23.457 SO libspdk_blob.so.10.1 00:03:23.716 SYMLINK libspdk_blob.so 00:03:23.975 CC lib/blobfs/tree.o 00:03:23.975 CC lib/blobfs/blobfs.o 00:03:23.975 CC lib/lvol/lvol.o 00:03:24.540 LIB libspdk_bdev.a 00:03:24.540 SO libspdk_bdev.so.14.0 00:03:24.797 LIB libspdk_blobfs.a 00:03:24.797 SYMLINK libspdk_bdev.so 00:03:24.797 SO libspdk_blobfs.so.9.0 00:03:24.797 SYMLINK libspdk_blobfs.so 00:03:24.797 CC lib/scsi/dev.o 00:03:24.797 CC lib/scsi/lun.o 
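The lib/nvme objects above (nvme_ctrlr, nvme_qpair, nvme_pcie, ...) are SPDK's userspace NVMe driver; its entry point is a probe/attach callback flow. The sketch below matches recent SPDK signatures but is an illustration, not code from this repository, and details may drift between releases:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("probing %s\n", trid->traddr);
	return true;	/* true = go ahead and attach this controller */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr,
	  const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("attached %s\n", trid->traddr);
	spdk_nvme_detach(ctrlr);	/* illustration only: detach right away */
}

int main(void)
{
	struct spdk_env_opts env_opts;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "probe_sketch";
	if (spdk_env_init(&env_opts) < 0)
		return 1;
	/* NULL transport ID = enumerate local PCIe NVMe controllers. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0)
		return 1;
	return 0;
}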
00:03:24.797 CC lib/nbd/nbd.o 00:03:24.797 CC lib/ftl/ftl_core.o 00:03:24.797 CC lib/ftl/ftl_init.o 00:03:24.797 CC lib/scsi/port.o 00:03:24.797 CC lib/nbd/nbd_rpc.o 00:03:24.797 CC lib/nvmf/ctrlr.o 00:03:24.797 CC lib/ublk/ublk.o 00:03:24.797 LIB libspdk_lvol.a 00:03:24.797 SO libspdk_lvol.so.9.1 00:03:25.055 SYMLINK libspdk_lvol.so 00:03:25.055 CC lib/scsi/scsi.o 00:03:25.055 CC lib/scsi/scsi_bdev.o 00:03:25.055 CC lib/ftl/ftl_layout.o 00:03:25.055 CC lib/ftl/ftl_debug.o 00:03:25.055 CC lib/ftl/ftl_io.o 00:03:25.055 CC lib/ublk/ublk_rpc.o 00:03:25.312 CC lib/ftl/ftl_sb.o 00:03:25.312 CC lib/ftl/ftl_l2p.o 00:03:25.312 CC lib/nvmf/ctrlr_discovery.o 00:03:25.312 CC lib/ftl/ftl_l2p_flat.o 00:03:25.312 CC lib/ftl/ftl_nv_cache.o 00:03:25.312 LIB libspdk_nbd.a 00:03:25.312 CC lib/ftl/ftl_band.o 00:03:25.312 CC lib/ftl/ftl_band_ops.o 00:03:25.312 SO libspdk_nbd.so.6.0 00:03:25.570 SYMLINK libspdk_nbd.so 00:03:25.570 CC lib/ftl/ftl_writer.o 00:03:25.570 CC lib/nvmf/ctrlr_bdev.o 00:03:25.570 CC lib/ftl/ftl_rq.o 00:03:25.570 LIB libspdk_ublk.a 00:03:25.570 CC lib/scsi/scsi_pr.o 00:03:25.570 SO libspdk_ublk.so.2.0 00:03:25.570 SYMLINK libspdk_ublk.so 00:03:25.570 CC lib/scsi/scsi_rpc.o 00:03:25.570 CC lib/nvmf/subsystem.o 00:03:25.827 CC lib/ftl/ftl_reloc.o 00:03:25.827 CC lib/ftl/ftl_l2p_cache.o 00:03:25.827 CC lib/ftl/ftl_p2l.o 00:03:25.827 CC lib/ftl/mngt/ftl_mngt.o 00:03:25.827 CC lib/nvmf/nvmf.o 00:03:25.827 CC lib/scsi/task.o 00:03:26.084 LIB libspdk_scsi.a 00:03:26.084 CC lib/nvmf/nvmf_rpc.o 00:03:26.084 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:26.084 CC lib/nvmf/transport.o 00:03:26.084 SO libspdk_scsi.so.8.0 00:03:26.342 SYMLINK libspdk_scsi.so 00:03:26.342 CC lib/nvmf/tcp.o 00:03:26.342 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:26.342 CC lib/nvmf/rdma.o 00:03:26.342 CC lib/iscsi/conn.o 00:03:26.599 CC lib/vhost/vhost.o 00:03:26.599 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:26.599 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:26.856 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:26.857 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:26.857 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:26.857 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:26.857 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:27.115 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:27.115 CC lib/vhost/vhost_rpc.o 00:03:27.115 CC lib/iscsi/init_grp.o 00:03:27.115 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:27.115 CC lib/iscsi/iscsi.o 00:03:27.115 CC lib/vhost/vhost_scsi.o 00:03:27.115 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:27.372 CC lib/iscsi/md5.o 00:03:27.372 CC lib/ftl/utils/ftl_conf.o 00:03:27.372 CC lib/iscsi/param.o 00:03:27.372 CC lib/vhost/vhost_blk.o 00:03:27.372 CC lib/iscsi/portal_grp.o 00:03:27.372 CC lib/vhost/rte_vhost_user.o 00:03:27.372 CC lib/ftl/utils/ftl_md.o 00:03:27.630 CC lib/ftl/utils/ftl_mempool.o 00:03:27.630 CC lib/ftl/utils/ftl_bitmap.o 00:03:27.630 CC lib/iscsi/tgt_node.o 00:03:27.888 CC lib/ftl/utils/ftl_property.o 00:03:27.888 CC lib/iscsi/iscsi_subsystem.o 00:03:27.888 CC lib/iscsi/iscsi_rpc.o 00:03:28.146 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:28.146 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:28.146 CC lib/iscsi/task.o 00:03:28.146 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:28.404 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:28.404 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:28.404 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:28.404 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:28.404 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:28.404 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:28.404 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:28.404 CC lib/ftl/base/ftl_base_dev.o 00:03:28.404 CC 
lib/ftl/base/ftl_base_bdev.o 00:03:28.404 CC lib/ftl/ftl_trace.o 00:03:28.662 LIB libspdk_vhost.a 00:03:28.662 SO libspdk_vhost.so.7.1 00:03:28.662 LIB libspdk_iscsi.a 00:03:28.662 SYMLINK libspdk_vhost.so 00:03:28.662 LIB libspdk_ftl.a 00:03:28.920 SO libspdk_iscsi.so.7.0 00:03:28.920 SO libspdk_ftl.so.8.0 00:03:28.920 SYMLINK libspdk_iscsi.so 00:03:28.920 LIB libspdk_nvmf.a 00:03:29.179 SO libspdk_nvmf.so.17.0 00:03:29.437 SYMLINK libspdk_ftl.so 00:03:29.437 SYMLINK libspdk_nvmf.so 00:03:29.697 CC module/env_dpdk/env_dpdk_rpc.o 00:03:29.697 CC module/accel/dsa/accel_dsa.o 00:03:29.697 CC module/accel/ioat/accel_ioat.o 00:03:29.697 CC module/accel/iaa/accel_iaa.o 00:03:29.697 CC module/sock/posix/posix.o 00:03:29.697 CC module/scheduler/gscheduler/gscheduler.o 00:03:29.697 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:29.697 CC module/accel/error/accel_error.o 00:03:29.697 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:29.697 CC module/blob/bdev/blob_bdev.o 00:03:29.697 LIB libspdk_env_dpdk_rpc.a 00:03:29.697 SO libspdk_env_dpdk_rpc.so.5.0 00:03:29.956 LIB libspdk_scheduler_gscheduler.a 00:03:29.956 SYMLINK libspdk_env_dpdk_rpc.so 00:03:29.956 CC module/accel/dsa/accel_dsa_rpc.o 00:03:29.956 LIB libspdk_scheduler_dpdk_governor.a 00:03:29.956 SO libspdk_scheduler_gscheduler.so.3.0 00:03:29.956 CC module/accel/ioat/accel_ioat_rpc.o 00:03:29.956 LIB libspdk_scheduler_dynamic.a 00:03:29.956 CC module/accel/iaa/accel_iaa_rpc.o 00:03:29.956 CC module/accel/error/accel_error_rpc.o 00:03:29.956 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:29.956 SO libspdk_scheduler_dynamic.so.3.0 00:03:29.956 SYMLINK libspdk_scheduler_gscheduler.so 00:03:29.956 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:29.956 SYMLINK libspdk_scheduler_dynamic.so 00:03:29.956 LIB libspdk_accel_dsa.a 00:03:29.956 LIB libspdk_accel_ioat.a 00:03:29.956 LIB libspdk_blob_bdev.a 00:03:29.956 LIB libspdk_accel_iaa.a 00:03:29.956 SO libspdk_accel_dsa.so.4.0 00:03:29.956 SO libspdk_blob_bdev.so.10.1 00:03:29.956 SO libspdk_accel_ioat.so.5.0 00:03:29.956 LIB libspdk_accel_error.a 00:03:29.956 SO libspdk_accel_iaa.so.2.0 00:03:30.214 SO libspdk_accel_error.so.1.0 00:03:30.214 SYMLINK libspdk_accel_ioat.so 00:03:30.214 SYMLINK libspdk_blob_bdev.so 00:03:30.214 SYMLINK libspdk_accel_dsa.so 00:03:30.214 SYMLINK libspdk_accel_error.so 00:03:30.214 SYMLINK libspdk_accel_iaa.so 00:03:30.214 CC module/bdev/error/vbdev_error.o 00:03:30.214 CC module/bdev/null/bdev_null.o 00:03:30.214 CC module/bdev/nvme/bdev_nvme.o 00:03:30.214 CC module/blobfs/bdev/blobfs_bdev.o 00:03:30.214 CC module/bdev/lvol/vbdev_lvol.o 00:03:30.214 CC module/bdev/delay/vbdev_delay.o 00:03:30.214 CC module/bdev/malloc/bdev_malloc.o 00:03:30.214 CC module/bdev/gpt/gpt.o 00:03:30.214 CC module/bdev/passthru/vbdev_passthru.o 00:03:30.473 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:30.473 LIB libspdk_sock_posix.a 00:03:30.473 SO libspdk_sock_posix.so.5.0 00:03:30.473 CC module/bdev/gpt/vbdev_gpt.o 00:03:30.473 CC module/bdev/null/bdev_null_rpc.o 00:03:30.814 CC module/bdev/error/vbdev_error_rpc.o 00:03:30.814 SYMLINK libspdk_sock_posix.so 00:03:30.814 LIB libspdk_blobfs_bdev.a 00:03:30.814 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:30.814 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:30.814 SO libspdk_blobfs_bdev.so.5.0 00:03:30.814 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:30.814 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:30.814 SYMLINK libspdk_blobfs_bdev.so 00:03:30.814 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:30.814 LIB 
libspdk_bdev_null.a 00:03:30.814 LIB libspdk_bdev_error.a 00:03:30.814 LIB libspdk_bdev_gpt.a 00:03:30.814 SO libspdk_bdev_null.so.5.0 00:03:30.814 LIB libspdk_bdev_passthru.a 00:03:30.814 SO libspdk_bdev_error.so.5.0 00:03:30.814 LIB libspdk_bdev_delay.a 00:03:30.814 SO libspdk_bdev_gpt.so.5.0 00:03:30.814 SO libspdk_bdev_passthru.so.5.0 00:03:30.814 SO libspdk_bdev_delay.so.5.0 00:03:30.814 SYMLINK libspdk_bdev_null.so 00:03:30.814 SYMLINK libspdk_bdev_error.so 00:03:30.814 LIB libspdk_bdev_malloc.a 00:03:30.814 SYMLINK libspdk_bdev_gpt.so 00:03:31.073 SYMLINK libspdk_bdev_passthru.so 00:03:31.073 CC module/bdev/nvme/nvme_rpc.o 00:03:31.073 SO libspdk_bdev_malloc.so.5.0 00:03:31.073 SYMLINK libspdk_bdev_delay.so 00:03:31.073 CC module/bdev/nvme/bdev_mdns_client.o 00:03:31.073 CC module/bdev/raid/bdev_raid.o 00:03:31.073 CC module/bdev/split/vbdev_split.o 00:03:31.073 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:31.073 LIB libspdk_bdev_lvol.a 00:03:31.073 SYMLINK libspdk_bdev_malloc.so 00:03:31.073 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:31.073 CC module/bdev/xnvme/bdev_xnvme.o 00:03:31.073 SO libspdk_bdev_lvol.so.5.0 00:03:31.073 CC module/bdev/split/vbdev_split_rpc.o 00:03:31.074 SYMLINK libspdk_bdev_lvol.so 00:03:31.074 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:31.074 CC module/bdev/nvme/vbdev_opal.o 00:03:31.074 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:31.333 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:31.333 LIB libspdk_bdev_split.a 00:03:31.334 CC module/bdev/raid/bdev_raid_rpc.o 00:03:31.334 LIB libspdk_bdev_xnvme.a 00:03:31.334 SO libspdk_bdev_split.so.5.0 00:03:31.334 SO libspdk_bdev_xnvme.so.2.0 00:03:31.334 LIB libspdk_bdev_zone_block.a 00:03:31.334 SYMLINK libspdk_bdev_split.so 00:03:31.334 CC module/bdev/raid/bdev_raid_sb.o 00:03:31.334 SO libspdk_bdev_zone_block.so.5.0 00:03:31.334 SYMLINK libspdk_bdev_xnvme.so 00:03:31.334 CC module/bdev/raid/raid0.o 00:03:31.334 CC module/bdev/raid/raid1.o 00:03:31.592 CC module/bdev/raid/concat.o 00:03:31.592 SYMLINK libspdk_bdev_zone_block.so 00:03:31.592 CC module/bdev/aio/bdev_aio.o 00:03:31.592 CC module/bdev/ftl/bdev_ftl.o 00:03:31.592 CC module/bdev/iscsi/bdev_iscsi.o 00:03:31.592 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:31.592 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:31.592 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:31.592 CC module/bdev/aio/bdev_aio_rpc.o 00:03:31.592 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:31.851 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:31.851 LIB libspdk_bdev_aio.a 00:03:31.851 SO libspdk_bdev_aio.so.5.0 00:03:31.851 LIB libspdk_bdev_raid.a 00:03:31.851 LIB libspdk_bdev_ftl.a 00:03:31.851 SO libspdk_bdev_ftl.so.5.0 00:03:31.851 SYMLINK libspdk_bdev_aio.so 00:03:31.851 LIB libspdk_bdev_iscsi.a 00:03:31.851 SO libspdk_bdev_raid.so.5.0 00:03:32.109 SO libspdk_bdev_iscsi.so.5.0 00:03:32.109 SYMLINK libspdk_bdev_ftl.so 00:03:32.109 SYMLINK libspdk_bdev_iscsi.so 00:03:32.109 SYMLINK libspdk_bdev_raid.so 00:03:32.109 LIB libspdk_bdev_virtio.a 00:03:32.109 SO libspdk_bdev_virtio.so.5.0 00:03:32.367 SYMLINK libspdk_bdev_virtio.so 00:03:32.626 LIB libspdk_bdev_nvme.a 00:03:32.884 SO libspdk_bdev_nvme.so.6.0 00:03:32.884 SYMLINK libspdk_bdev_nvme.so 00:03:33.451 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:33.451 CC module/event/subsystems/sock/sock.o 00:03:33.451 CC module/event/subsystems/vmd/vmd.o 00:03:33.451 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:33.451 CC module/event/subsystems/iobuf/iobuf.o 00:03:33.451 CC module/event/subsystems/iobuf/iobuf_rpc.o 
00:03:33.451 CC module/event/subsystems/scheduler/scheduler.o 00:03:33.451 LIB libspdk_event_vmd.a 00:03:33.451 LIB libspdk_event_sock.a 00:03:33.451 LIB libspdk_event_vhost_blk.a 00:03:33.451 SO libspdk_event_vmd.so.5.0 00:03:33.451 SO libspdk_event_sock.so.4.0 00:03:33.451 LIB libspdk_event_scheduler.a 00:03:33.451 SO libspdk_event_vhost_blk.so.2.0 00:03:33.451 LIB libspdk_event_iobuf.a 00:03:33.451 SO libspdk_event_scheduler.so.3.0 00:03:33.451 SYMLINK libspdk_event_vmd.so 00:03:33.451 SYMLINK libspdk_event_vhost_blk.so 00:03:33.451 SO libspdk_event_iobuf.so.2.0 00:03:33.709 SYMLINK libspdk_event_sock.so 00:03:33.709 SYMLINK libspdk_event_scheduler.so 00:03:33.709 SYMLINK libspdk_event_iobuf.so 00:03:33.709 CC module/event/subsystems/accel/accel.o 00:03:33.967 LIB libspdk_event_accel.a 00:03:33.967 SO libspdk_event_accel.so.5.0 00:03:34.226 SYMLINK libspdk_event_accel.so 00:03:34.226 CC module/event/subsystems/bdev/bdev.o 00:03:34.484 LIB libspdk_event_bdev.a 00:03:34.484 SO libspdk_event_bdev.so.5.0 00:03:34.742 SYMLINK libspdk_event_bdev.so 00:03:34.742 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:34.742 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:34.742 CC module/event/subsystems/nbd/nbd.o 00:03:34.742 CC module/event/subsystems/ublk/ublk.o 00:03:34.742 CC module/event/subsystems/scsi/scsi.o 00:03:35.001 LIB libspdk_event_ublk.a 00:03:35.001 LIB libspdk_event_scsi.a 00:03:35.001 LIB libspdk_event_nbd.a 00:03:35.001 SO libspdk_event_ublk.so.2.0 00:03:35.001 SO libspdk_event_scsi.so.5.0 00:03:35.001 SO libspdk_event_nbd.so.5.0 00:03:35.259 SYMLINK libspdk_event_ublk.so 00:03:35.259 SYMLINK libspdk_event_scsi.so 00:03:35.259 LIB libspdk_event_nvmf.a 00:03:35.259 SYMLINK libspdk_event_nbd.so 00:03:35.259 SO libspdk_event_nvmf.so.5.0 00:03:35.259 SYMLINK libspdk_event_nvmf.so 00:03:35.259 CC module/event/subsystems/iscsi/iscsi.o 00:03:35.259 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:35.517 LIB libspdk_event_iscsi.a 00:03:35.517 LIB libspdk_event_vhost_scsi.a 00:03:35.517 SO libspdk_event_iscsi.so.5.0 00:03:35.517 SO libspdk_event_vhost_scsi.so.2.0 00:03:35.776 SYMLINK libspdk_event_vhost_scsi.so 00:03:35.776 SYMLINK libspdk_event_iscsi.so 00:03:35.776 SO libspdk.so.5.0 00:03:35.776 SYMLINK libspdk.so 00:03:36.034 CC app/trace_record/trace_record.o 00:03:36.034 CXX app/trace/trace.o 00:03:36.034 CC examples/ioat/perf/perf.o 00:03:36.034 CC examples/sock/hello_world/hello_sock.o 00:03:36.034 CC examples/nvme/hello_world/hello_world.o 00:03:36.034 CC examples/accel/perf/accel_perf.o 00:03:36.034 CC test/app/bdev_svc/bdev_svc.o 00:03:36.035 CC examples/blob/hello_world/hello_blob.o 00:03:36.035 CC test/accel/dif/dif.o 00:03:36.035 CC examples/bdev/hello_world/hello_bdev.o 00:03:36.292 LINK spdk_trace_record 00:03:36.292 LINK bdev_svc 00:03:36.292 LINK ioat_perf 00:03:36.292 LINK hello_bdev 00:03:36.292 LINK hello_world 00:03:36.292 LINK hello_sock 00:03:36.292 LINK hello_blob 00:03:36.550 LINK spdk_trace 00:03:36.550 CC examples/ioat/verify/verify.o 00:03:36.550 LINK dif 00:03:36.550 CC test/app/histogram_perf/histogram_perf.o 00:03:36.550 CC examples/nvme/reconnect/reconnect.o 00:03:36.550 LINK accel_perf 00:03:36.550 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:36.550 CC test/bdev/bdevio/bdevio.o 00:03:36.809 CC examples/blob/cli/blobcli.o 00:03:36.809 CC examples/bdev/bdevperf/bdevperf.o 00:03:36.809 LINK verify 00:03:36.809 LINK histogram_perf 00:03:36.809 CC app/nvmf_tgt/nvmf_main.o 00:03:36.809 LINK nvmf_tgt 00:03:36.809 CC app/iscsi_tgt/iscsi_tgt.o 00:03:37.067 
LINK reconnect 00:03:37.067 CC examples/vmd/lsvmd/lsvmd.o 00:03:37.067 CC app/spdk_lspci/spdk_lspci.o 00:03:37.067 CC app/spdk_tgt/spdk_tgt.o 00:03:37.067 LINK nvme_fuzz 00:03:37.067 LINK bdevio 00:03:37.067 LINK spdk_lspci 00:03:37.067 LINK iscsi_tgt 00:03:37.067 LINK lsvmd 00:03:37.067 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:37.067 LINK blobcli 00:03:37.325 LINK spdk_tgt 00:03:37.325 CC examples/nvme/arbitration/arbitration.o 00:03:37.325 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:37.325 CC examples/vmd/led/led.o 00:03:37.325 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:37.325 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:37.325 CC app/spdk_nvme_perf/perf.o 00:03:37.583 CC examples/nvmf/nvmf/nvmf.o 00:03:37.583 LINK bdevperf 00:03:37.583 CC test/app/jsoncat/jsoncat.o 00:03:37.583 LINK led 00:03:37.583 CC app/spdk_nvme_identify/identify.o 00:03:37.583 LINK arbitration 00:03:37.583 LINK jsoncat 00:03:37.843 LINK nvmf 00:03:37.843 LINK nvme_manage 00:03:37.843 CC app/spdk_nvme_discover/discovery_aer.o 00:03:37.843 CC app/spdk_top/spdk_top.o 00:03:37.843 LINK vhost_fuzz 00:03:37.843 CC app/vhost/vhost.o 00:03:37.843 CC examples/util/zipf/zipf.o 00:03:38.101 CC examples/nvme/hotplug/hotplug.o 00:03:38.101 LINK spdk_nvme_discover 00:03:38.101 CC app/spdk_dd/spdk_dd.o 00:03:38.101 LINK vhost 00:03:38.101 CC examples/thread/thread/thread_ex.o 00:03:38.101 LINK zipf 00:03:38.359 LINK spdk_nvme_perf 00:03:38.359 CC test/app/stub/stub.o 00:03:38.359 LINK hotplug 00:03:38.359 LINK thread 00:03:38.359 CC examples/idxd/perf/perf.o 00:03:38.359 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:38.359 LINK spdk_dd 00:03:38.359 LINK stub 00:03:38.359 LINK spdk_nvme_identify 00:03:38.618 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:38.618 CC app/fio/nvme/fio_plugin.o 00:03:38.618 LINK interrupt_tgt 00:03:38.618 CC app/fio/bdev/fio_plugin.o 00:03:38.618 LINK cmb_copy 00:03:38.618 LINK spdk_top 00:03:38.618 CC examples/nvme/abort/abort.o 00:03:38.618 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:38.618 LINK idxd_perf 00:03:38.877 CC test/blobfs/mkfs/mkfs.o 00:03:38.877 TEST_HEADER include/spdk/accel.h 00:03:38.877 TEST_HEADER include/spdk/accel_module.h 00:03:38.877 TEST_HEADER include/spdk/assert.h 00:03:38.877 TEST_HEADER include/spdk/barrier.h 00:03:38.877 TEST_HEADER include/spdk/base64.h 00:03:38.877 TEST_HEADER include/spdk/bdev.h 00:03:38.877 TEST_HEADER include/spdk/bdev_module.h 00:03:38.877 TEST_HEADER include/spdk/bdev_zone.h 00:03:38.877 TEST_HEADER include/spdk/bit_array.h 00:03:38.877 TEST_HEADER include/spdk/bit_pool.h 00:03:38.877 TEST_HEADER include/spdk/blob_bdev.h 00:03:38.877 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:38.877 TEST_HEADER include/spdk/blobfs.h 00:03:38.877 TEST_HEADER include/spdk/blob.h 00:03:38.877 TEST_HEADER include/spdk/conf.h 00:03:38.877 TEST_HEADER include/spdk/config.h 00:03:38.877 TEST_HEADER include/spdk/cpuset.h 00:03:38.877 TEST_HEADER include/spdk/crc16.h 00:03:38.877 TEST_HEADER include/spdk/crc32.h 00:03:38.877 TEST_HEADER include/spdk/crc64.h 00:03:38.877 TEST_HEADER include/spdk/dif.h 00:03:38.877 TEST_HEADER include/spdk/dma.h 00:03:38.877 TEST_HEADER include/spdk/endian.h 00:03:38.877 TEST_HEADER include/spdk/env_dpdk.h 00:03:38.877 TEST_HEADER include/spdk/env.h 00:03:38.877 TEST_HEADER include/spdk/event.h 00:03:38.877 TEST_HEADER include/spdk/fd_group.h 00:03:38.877 TEST_HEADER include/spdk/fd.h 00:03:38.877 TEST_HEADER include/spdk/file.h 00:03:38.877 LINK pmr_persistence 00:03:38.877 TEST_HEADER include/spdk/ftl.h 
00:03:38.877 TEST_HEADER include/spdk/gpt_spec.h 00:03:38.877 TEST_HEADER include/spdk/hexlify.h 00:03:38.877 TEST_HEADER include/spdk/histogram_data.h 00:03:38.877 TEST_HEADER include/spdk/idxd.h 00:03:38.877 TEST_HEADER include/spdk/idxd_spec.h 00:03:38.877 TEST_HEADER include/spdk/init.h 00:03:38.877 TEST_HEADER include/spdk/ioat.h 00:03:38.877 TEST_HEADER include/spdk/ioat_spec.h 00:03:38.877 TEST_HEADER include/spdk/iscsi_spec.h 00:03:38.877 TEST_HEADER include/spdk/json.h 00:03:38.877 TEST_HEADER include/spdk/jsonrpc.h 00:03:38.877 TEST_HEADER include/spdk/likely.h 00:03:38.877 TEST_HEADER include/spdk/log.h 00:03:38.877 TEST_HEADER include/spdk/lvol.h 00:03:38.877 TEST_HEADER include/spdk/memory.h 00:03:38.877 TEST_HEADER include/spdk/mmio.h 00:03:38.877 TEST_HEADER include/spdk/nbd.h 00:03:38.877 TEST_HEADER include/spdk/notify.h 00:03:38.877 TEST_HEADER include/spdk/nvme.h 00:03:38.877 TEST_HEADER include/spdk/nvme_intel.h 00:03:38.877 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:38.877 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:38.877 TEST_HEADER include/spdk/nvme_spec.h 00:03:38.877 TEST_HEADER include/spdk/nvme_zns.h 00:03:38.877 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:38.877 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:38.877 TEST_HEADER include/spdk/nvmf.h 00:03:38.877 TEST_HEADER include/spdk/nvmf_spec.h 00:03:38.877 TEST_HEADER include/spdk/nvmf_transport.h 00:03:38.877 TEST_HEADER include/spdk/opal.h 00:03:38.877 TEST_HEADER include/spdk/opal_spec.h 00:03:38.877 LINK mkfs 00:03:38.877 TEST_HEADER include/spdk/pci_ids.h 00:03:38.877 TEST_HEADER include/spdk/pipe.h 00:03:38.877 TEST_HEADER include/spdk/queue.h 00:03:38.877 TEST_HEADER include/spdk/reduce.h 00:03:38.877 TEST_HEADER include/spdk/rpc.h 00:03:38.877 TEST_HEADER include/spdk/scheduler.h 00:03:38.877 TEST_HEADER include/spdk/scsi.h 00:03:38.877 TEST_HEADER include/spdk/scsi_spec.h 00:03:38.877 TEST_HEADER include/spdk/sock.h 00:03:38.877 TEST_HEADER include/spdk/stdinc.h 00:03:38.877 TEST_HEADER include/spdk/string.h 00:03:38.877 TEST_HEADER include/spdk/thread.h 00:03:38.877 TEST_HEADER include/spdk/trace.h 00:03:38.877 CC test/event/event_perf/event_perf.o 00:03:38.877 TEST_HEADER include/spdk/trace_parser.h 00:03:38.877 TEST_HEADER include/spdk/tree.h 00:03:38.877 TEST_HEADER include/spdk/ublk.h 00:03:38.877 TEST_HEADER include/spdk/util.h 00:03:38.877 TEST_HEADER include/spdk/uuid.h 00:03:38.877 TEST_HEADER include/spdk/version.h 00:03:39.136 CC test/dma/test_dma/test_dma.o 00:03:39.136 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:39.136 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:39.136 TEST_HEADER include/spdk/vhost.h 00:03:39.136 TEST_HEADER include/spdk/vmd.h 00:03:39.136 TEST_HEADER include/spdk/xor.h 00:03:39.136 TEST_HEADER include/spdk/zipf.h 00:03:39.136 CXX test/cpp_headers/accel.o 00:03:39.136 CC test/env/mem_callbacks/mem_callbacks.o 00:03:39.136 LINK abort 00:03:39.136 LINK spdk_nvme 00:03:39.136 LINK spdk_bdev 00:03:39.136 LINK event_perf 00:03:39.136 CXX test/cpp_headers/accel_module.o 00:03:39.136 CC test/lvol/esnap/esnap.o 00:03:39.136 LINK iscsi_fuzz 00:03:39.136 CC test/rpc_client/rpc_client_test.o 00:03:39.394 CC test/nvme/aer/aer.o 00:03:39.394 CC test/nvme/reset/reset.o 00:03:39.394 CC test/nvme/sgl/sgl.o 00:03:39.394 CXX test/cpp_headers/assert.o 00:03:39.394 CC test/event/reactor/reactor.o 00:03:39.394 LINK test_dma 00:03:39.394 LINK rpc_client_test 00:03:39.394 CXX test/cpp_headers/barrier.o 00:03:39.394 LINK reactor 00:03:39.653 LINK mem_callbacks 00:03:39.653 LINK reset 
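The TEST_HEADER include lines above and the CXX test/cpp_headers/*.o objects interleaved with the test links here are SPDK's header self-containment check: each public header is apparently compiled as its own translation unit, so a header that forgets one of its own dependencies fails the build instead of slipping through. A minimal sketch of that technique, assuming a generator along these lines (file names and compiler flags are illustrative, not the repo's actual makefile):

#!/usr/bin/env bash
# Hypothetical reconstruction of a per-header compile check; SPDK's real
# test/cpp_headers harness may differ in detail.
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    # One translation unit per header: if the header is not self-contained,
    # this compile fails immediately.
    printf '#include <spdk/%s.h>\n' "$name" > "cpp_headers_$name.cpp"
    g++ -I include -c "cpp_headers_$name.cpp" -o "cpp_headers_$name.o"
done

Compiling the C headers as C++ also exercises the extern "C" guards, which is presumably why these objects live under a cpp_headers target.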
00:03:39.653 LINK sgl 00:03:39.653 LINK aer 00:03:39.653 CC test/thread/poller_perf/poller_perf.o 00:03:39.653 CXX test/cpp_headers/base64.o 00:03:39.653 CC test/env/vtophys/vtophys.o 00:03:39.653 CC test/event/reactor_perf/reactor_perf.o 00:03:39.653 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:39.653 CXX test/cpp_headers/bdev.o 00:03:39.653 CC test/event/app_repeat/app_repeat.o 00:03:39.653 CXX test/cpp_headers/bdev_module.o 00:03:39.911 LINK poller_perf 00:03:39.911 LINK vtophys 00:03:39.911 LINK reactor_perf 00:03:39.911 CC test/nvme/e2edp/nvme_dp.o 00:03:39.911 LINK env_dpdk_post_init 00:03:39.911 LINK app_repeat 00:03:39.911 CC test/event/scheduler/scheduler.o 00:03:39.911 CXX test/cpp_headers/bdev_zone.o 00:03:39.911 CXX test/cpp_headers/bit_array.o 00:03:39.911 CXX test/cpp_headers/bit_pool.o 00:03:39.911 CXX test/cpp_headers/blob_bdev.o 00:03:39.911 CXX test/cpp_headers/blobfs_bdev.o 00:03:40.174 CXX test/cpp_headers/blobfs.o 00:03:40.174 CXX test/cpp_headers/blob.o 00:03:40.174 LINK nvme_dp 00:03:40.174 CC test/env/memory/memory_ut.o 00:03:40.174 CXX test/cpp_headers/conf.o 00:03:40.174 LINK scheduler 00:03:40.174 CXX test/cpp_headers/config.o 00:03:40.174 CC test/nvme/overhead/overhead.o 00:03:40.174 CXX test/cpp_headers/cpuset.o 00:03:40.174 CC test/nvme/err_injection/err_injection.o 00:03:40.174 CXX test/cpp_headers/crc16.o 00:03:40.174 CC test/env/pci/pci_ut.o 00:03:40.174 CC test/nvme/startup/startup.o 00:03:40.174 CC test/nvme/reserve/reserve.o 00:03:40.432 CC test/nvme/simple_copy/simple_copy.o 00:03:40.432 LINK err_injection 00:03:40.432 CC test/nvme/connect_stress/connect_stress.o 00:03:40.432 CXX test/cpp_headers/crc32.o 00:03:40.432 LINK overhead 00:03:40.432 LINK startup 00:03:40.432 LINK reserve 00:03:40.432 LINK connect_stress 00:03:40.432 CXX test/cpp_headers/crc64.o 00:03:40.691 CC test/nvme/boot_partition/boot_partition.o 00:03:40.691 LINK simple_copy 00:03:40.691 CC test/nvme/compliance/nvme_compliance.o 00:03:40.691 LINK pci_ut 00:03:40.691 CC test/nvme/fused_ordering/fused_ordering.o 00:03:40.691 CXX test/cpp_headers/dif.o 00:03:40.691 LINK boot_partition 00:03:40.691 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:40.691 CC test/nvme/fdp/fdp.o 00:03:40.691 CC test/nvme/cuse/cuse.o 00:03:40.949 CXX test/cpp_headers/dma.o 00:03:40.949 LINK memory_ut 00:03:40.949 LINK fused_ordering 00:03:40.949 CXX test/cpp_headers/endian.o 00:03:40.949 CXX test/cpp_headers/env_dpdk.o 00:03:40.949 LINK doorbell_aers 00:03:40.949 LINK nvme_compliance 00:03:40.949 CXX test/cpp_headers/env.o 00:03:40.949 CXX test/cpp_headers/event.o 00:03:40.949 LINK fdp 00:03:40.949 CXX test/cpp_headers/fd_group.o 00:03:41.217 CXX test/cpp_headers/fd.o 00:03:41.218 CXX test/cpp_headers/file.o 00:03:41.218 CXX test/cpp_headers/ftl.o 00:03:41.218 CXX test/cpp_headers/gpt_spec.o 00:03:41.218 CXX test/cpp_headers/hexlify.o 00:03:41.218 CXX test/cpp_headers/histogram_data.o 00:03:41.218 CXX test/cpp_headers/idxd.o 00:03:41.218 CXX test/cpp_headers/idxd_spec.o 00:03:41.218 CXX test/cpp_headers/init.o 00:03:41.218 CXX test/cpp_headers/ioat.o 00:03:41.218 CXX test/cpp_headers/ioat_spec.o 00:03:41.218 CXX test/cpp_headers/iscsi_spec.o 00:03:41.477 CXX test/cpp_headers/json.o 00:03:41.477 CXX test/cpp_headers/jsonrpc.o 00:03:41.477 CXX test/cpp_headers/likely.o 00:03:41.477 CXX test/cpp_headers/log.o 00:03:41.477 CXX test/cpp_headers/lvol.o 00:03:41.477 CXX test/cpp_headers/memory.o 00:03:41.477 CXX test/cpp_headers/mmio.o 00:03:41.477 CXX test/cpp_headers/nbd.o 00:03:41.477 CXX 
test/cpp_headers/notify.o 00:03:41.477 CXX test/cpp_headers/nvme.o 00:03:41.477 CXX test/cpp_headers/nvme_intel.o 00:03:41.477 CXX test/cpp_headers/nvme_ocssd.o 00:03:41.477 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:41.477 CXX test/cpp_headers/nvme_spec.o 00:03:41.477 CXX test/cpp_headers/nvme_zns.o 00:03:41.736 CXX test/cpp_headers/nvmf_cmd.o 00:03:41.736 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:41.736 CXX test/cpp_headers/nvmf.o 00:03:41.736 CXX test/cpp_headers/nvmf_spec.o 00:03:41.736 CXX test/cpp_headers/nvmf_transport.o 00:03:41.736 CXX test/cpp_headers/opal.o 00:03:41.736 CXX test/cpp_headers/opal_spec.o 00:03:41.736 CXX test/cpp_headers/pci_ids.o 00:03:41.736 LINK cuse 00:03:41.736 CXX test/cpp_headers/pipe.o 00:03:41.736 CXX test/cpp_headers/queue.o 00:03:41.736 CXX test/cpp_headers/reduce.o 00:03:41.736 CXX test/cpp_headers/rpc.o 00:03:41.993 CXX test/cpp_headers/scheduler.o 00:03:41.993 CXX test/cpp_headers/scsi.o 00:03:41.993 CXX test/cpp_headers/scsi_spec.o 00:03:41.993 CXX test/cpp_headers/sock.o 00:03:41.993 CXX test/cpp_headers/stdinc.o 00:03:41.993 CXX test/cpp_headers/string.o 00:03:41.993 CXX test/cpp_headers/thread.o 00:03:41.993 CXX test/cpp_headers/trace.o 00:03:41.993 CXX test/cpp_headers/trace_parser.o 00:03:41.993 CXX test/cpp_headers/tree.o 00:03:41.993 CXX test/cpp_headers/ublk.o 00:03:41.993 CXX test/cpp_headers/util.o 00:03:41.993 CXX test/cpp_headers/uuid.o 00:03:41.993 CXX test/cpp_headers/version.o 00:03:41.993 CXX test/cpp_headers/vfio_user_pci.o 00:03:41.993 CXX test/cpp_headers/vfio_user_spec.o 00:03:42.250 CXX test/cpp_headers/vhost.o 00:03:42.250 CXX test/cpp_headers/vmd.o 00:03:42.250 CXX test/cpp_headers/xor.o 00:03:42.250 CXX test/cpp_headers/zipf.o 00:03:44.778 LINK esnap 00:03:44.778 00:03:44.778 real 1m5.682s 00:03:44.778 user 6m2.734s 00:03:44.778 sys 1m45.356s 00:03:44.778 05:01:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:44.778 ************************************ 00:03:44.778 END TEST make 00:03:44.778 ************************************ 00:03:44.778 05:01:03 -- common/autotest_common.sh@10 -- $ set +x 00:03:45.038 05:01:03 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:45.038 05:01:03 -- nvmf/common.sh@7 -- # uname -s 00:03:45.038 05:01:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:45.038 05:01:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:45.038 05:01:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:45.038 05:01:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:45.038 05:01:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:45.038 05:01:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:45.038 05:01:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:45.038 05:01:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:45.038 05:01:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:45.038 05:01:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:45.038 05:01:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b65a3717-0a3d-4378-888e-eeb94342b361 00:03:45.038 05:01:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=b65a3717-0a3d-4378-888e-eeb94342b361 00:03:45.038 05:01:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:45.038 05:01:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:45.038 05:01:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:45.038 05:01:04 -- nvmf/common.sh@44 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:45.038 05:01:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:45.038 05:01:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.038 05:01:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.038 05:01:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.038 05:01:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.038 05:01:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.038 05:01:04 -- paths/export.sh@5 -- # export PATH 00:03:45.038 05:01:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.038 05:01:04 -- nvmf/common.sh@46 -- # : 0 00:03:45.038 05:01:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:45.038 05:01:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:45.038 05:01:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:45.038 05:01:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:45.038 05:01:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:45.038 05:01:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:45.038 05:01:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:45.038 05:01:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:45.038 05:01:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:45.038 05:01:04 -- spdk/autotest.sh@32 -- # uname -s 00:03:45.038 05:01:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:45.038 05:01:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:45.038 05:01:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.038 05:01:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:45.038 05:01:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.038 05:01:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:45.038 05:01:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:45.038 05:01:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:45.038 05:01:04 -- spdk/autotest.sh@48 -- # udevadm_pid=48332 00:03:45.038 05:01:04 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:45.038 05:01:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:45.038 05:01:04 -- spdk/autotest.sh@54 -- # echo 48351 00:03:45.038 05:01:04 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d 
/home/vagrant/spdk_repo/spdk/../output/power 00:03:45.038 05:01:04 -- spdk/autotest.sh@56 -- # echo 48359 00:03:45.038 05:01:04 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:45.038 05:01:04 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:45.038 05:01:04 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:45.038 05:01:04 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:45.038 05:01:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:45.038 05:01:04 -- common/autotest_common.sh@10 -- # set +x 00:03:45.038 05:01:04 -- spdk/autotest.sh@70 -- # create_test_list 00:03:45.038 05:01:04 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:45.038 05:01:04 -- common/autotest_common.sh@10 -- # set +x 00:03:45.296 05:01:04 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:45.296 05:01:04 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:45.296 05:01:04 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:45.296 05:01:04 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:45.296 05:01:04 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:45.296 05:01:04 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:45.296 05:01:04 -- common/autotest_common.sh@1440 -- # uname 00:03:45.296 05:01:04 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:45.296 05:01:04 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:45.296 05:01:04 -- common/autotest_common.sh@1460 -- # uname 00:03:45.296 05:01:04 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:45.296 05:01:04 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:45.296 05:01:04 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:45.296 05:01:04 -- spdk/autotest.sh@83 -- # hash lcov 00:03:45.296 05:01:04 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:45.296 05:01:04 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:45.296 --rc lcov_branch_coverage=1 00:03:45.296 --rc lcov_function_coverage=1 00:03:45.296 --rc genhtml_branch_coverage=1 00:03:45.296 --rc genhtml_function_coverage=1 00:03:45.296 --rc genhtml_legend=1 00:03:45.296 --rc geninfo_all_blocks=1 00:03:45.296 ' 00:03:45.296 05:01:04 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:45.296 --rc lcov_branch_coverage=1 00:03:45.296 --rc lcov_function_coverage=1 00:03:45.296 --rc genhtml_branch_coverage=1 00:03:45.296 --rc genhtml_function_coverage=1 00:03:45.296 --rc genhtml_legend=1 00:03:45.296 --rc geninfo_all_blocks=1 00:03:45.296 ' 00:03:45.296 05:01:04 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:45.296 --rc lcov_branch_coverage=1 00:03:45.296 --rc lcov_function_coverage=1 00:03:45.296 --rc genhtml_branch_coverage=1 00:03:45.296 --rc genhtml_function_coverage=1 00:03:45.296 --rc genhtml_legend=1 00:03:45.296 --rc geninfo_all_blocks=1 00:03:45.296 --no-external' 00:03:45.296 05:01:04 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:45.296 --rc lcov_branch_coverage=1 00:03:45.296 --rc lcov_function_coverage=1 00:03:45.296 --rc genhtml_branch_coverage=1 00:03:45.296 --rc genhtml_function_coverage=1 00:03:45.296 --rc genhtml_legend=1 00:03:45.296 --rc geninfo_all_blocks=1 00:03:45.296 --no-external' 00:03:45.296 05:01:04 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:45.296 lcov: LCOV version 1.14 00:03:45.296 05:01:04 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:53.417 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:53.417 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:53.417 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:53.417 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:53.417 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:53.417 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 
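The two lcov invocations above are the front half of the usual gcov/lcov cycle: a zero-coverage "Baseline" capture (-c -i) taken from the .gcno files before any test executes, so files the tests never touch still appear at 0% in the final report instead of vanishing. The geninfo "no functions found" warnings around this point are expected for the header-only cpp_headers objects, which contain no functions to instrument. A minimal sketch of the full cycle with the rc options trimmed for brevity (output paths illustrative; the later capture and merge steps are not visible in this part of the log but follow standard lcov usage):

#!/usr/bin/env bash
SRC=/home/vagrant/spdk_repo/spdk
OUT=$SRC/../output
LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)

# 1. Zero-coverage baseline from the .gcno files, before any test runs.
lcov "${LCOV_OPTS[@]}" -q -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"

# ... run the test suites, producing .gcda counter files ...

# 2. Capture the real counters, then merge with the baseline so untested
#    files are reported at 0% rather than dropped.
lcov "${LCOV_OPTS[@]}" -q -c -t Tests -d "$SRC" -o "$OUT/cov_test.info"
lcov "${LCOV_OPTS[@]}" -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
genhtml "$OUT/cov_total.info" -o "$OUT/coverage"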
00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions 
found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:11.534 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:11.534 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 
00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:11.535 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:11.535 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:12.104 05:01:31 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:12.104 05:01:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:12.104 05:01:31 -- common/autotest_common.sh@10 -- # set +x 00:04:12.104 05:01:31 -- spdk/autotest.sh@102 -- # rm -f 00:04:12.104 05:01:31 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.483 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:04:13.483 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:04:13.483 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:13.483 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:13.483 05:01:32 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:13.483 05:01:32 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:13.483 05:01:32 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:13.483 05:01:32 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:13.483 05:01:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:13.483 05:01:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:13.483 05:01:32 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:13.483 05:01:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:13.483 05:01:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:13.483 05:01:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:13.483 05:01:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:13.483 05:01:32 -- 
common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:13.483 05:01:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:13.483 05:01:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:13.483 05:01:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:13.483 05:01:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:04:13.483 05:01:32 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:04:13.483 05:01:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:13.483 05:01:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:13.483 05:01:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:13.483 05:01:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n2 00:04:13.483 05:01:32 -- common/autotest_common.sh@1647 -- # local device=nvme2n2 00:04:13.483 05:01:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:13.483 05:01:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:13.483 05:01:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:13.483 05:01:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n3 00:04:13.483 05:01:32 -- common/autotest_common.sh@1647 -- # local device=nvme2n3 00:04:13.483 05:01:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:13.484 05:01:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:13.484 05:01:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:13.484 05:01:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3c3n1 00:04:13.484 05:01:32 -- common/autotest_common.sh@1647 -- # local device=nvme3c3n1 00:04:13.484 05:01:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:13.484 05:01:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:13.484 05:01:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:13.484 05:01:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:04:13.484 05:01:32 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:04:13.484 05:01:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:13.484 05:01:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:13.484 05:01:32 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:13.484 05:01:32 -- spdk/autotest.sh@121 -- # grep -v p 00:04:13.484 05:01:32 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:04:13.484 05:01:32 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:13.484 05:01:32 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:13.484 05:01:32 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:13.484 05:01:32 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:13.484 05:01:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:13.484 No valid GPT data, bailing 00:04:13.484 05:01:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:13.484 05:01:32 -- scripts/common.sh@393 -- # pt= 00:04:13.484 05:01:32 -- scripts/common.sh@394 -- # return 1 00:04:13.484 05:01:32 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:13.484 1+0 records in 00:04:13.484 1+0 records out 00:04:13.484 1048576 bytes (1.0 
MB, 1.0 MiB) copied, 0.011043 s, 95.0 MB/s 00:04:13.484 05:01:32 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:13.484 05:01:32 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:13.484 05:01:32 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:04:13.484 05:01:32 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:13.484 05:01:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:13.484 No valid GPT data, bailing 00:04:13.484 05:01:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:13.744 05:01:32 -- scripts/common.sh@393 -- # pt= 00:04:13.744 05:01:32 -- scripts/common.sh@394 -- # return 1 00:04:13.744 05:01:32 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:13.744 1+0 records in 00:04:13.744 1+0 records out 00:04:13.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516357 s, 203 MB/s 00:04:13.744 05:01:32 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:13.744 05:01:32 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:13.744 05:01:32 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n1 00:04:13.744 05:01:32 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:04:13.744 05:01:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:13.744 No valid GPT data, bailing 00:04:13.744 05:01:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:13.744 05:01:32 -- scripts/common.sh@393 -- # pt= 00:04:13.744 05:01:32 -- scripts/common.sh@394 -- # return 1 00:04:13.744 05:01:32 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:13.744 1+0 records in 00:04:13.744 1+0 records out 00:04:13.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00601622 s, 174 MB/s 00:04:13.744 05:01:32 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:13.744 05:01:32 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:13.744 05:01:32 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n2 00:04:13.744 05:01:32 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:04:13.744 05:01:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:13.744 No valid GPT data, bailing 00:04:13.744 05:01:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:13.744 05:01:32 -- scripts/common.sh@393 -- # pt= 00:04:13.744 05:01:32 -- scripts/common.sh@394 -- # return 1 00:04:13.744 05:01:32 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:13.744 1+0 records in 00:04:13.744 1+0 records out 00:04:13.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509473 s, 206 MB/s 00:04:13.744 05:01:32 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:13.744 05:01:32 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:13.744 05:01:32 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme2n3 00:04:13.744 05:01:32 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:04:13.744 05:01:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:13.744 No valid GPT data, bailing 00:04:13.744 05:01:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:14.003 05:01:32 -- scripts/common.sh@393 -- # pt= 00:04:14.003 05:01:32 -- scripts/common.sh@394 -- # return 1 00:04:14.003 05:01:32 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M 
count=1 00:04:14.003 1+0 records in 00:04:14.003 1+0 records out 00:04:14.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540151 s, 194 MB/s 00:04:14.003 05:01:32 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:14.003 05:01:32 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:14.003 05:01:32 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme3n1 00:04:14.003 05:01:32 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:04:14.003 05:01:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:14.003 No valid GPT data, bailing 00:04:14.003 05:01:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:14.003 05:01:32 -- scripts/common.sh@393 -- # pt= 00:04:14.003 05:01:32 -- scripts/common.sh@394 -- # return 1 00:04:14.003 05:01:32 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:14.003 1+0 records in 00:04:14.003 1+0 records out 00:04:14.003 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456809 s, 230 MB/s 00:04:14.003 05:01:32 -- spdk/autotest.sh@129 -- # sync 00:04:14.262 05:01:33 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:14.262 05:01:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:14.262 05:01:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:16.201 05:01:35 -- spdk/autotest.sh@135 -- # uname -s 00:04:16.201 05:01:35 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:16.201 05:01:35 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:16.201 05:01:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:16.201 05:01:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.201 05:01:35 -- common/autotest_common.sh@10 -- # set +x 00:04:16.201 ************************************ 00:04:16.201 START TEST setup.sh 00:04:16.201 ************************************ 00:04:16.201 05:01:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:16.460 * Looking for test storage... 00:04:16.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:16.460 05:01:35 -- setup/test-setup.sh@10 -- # uname -s 00:04:16.460 05:01:35 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:16.460 05:01:35 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:16.460 05:01:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:16.460 05:01:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.460 05:01:35 -- common/autotest_common.sh@10 -- # set +x 00:04:16.460 ************************************ 00:04:16.460 START TEST acl 00:04:16.460 ************************************ 00:04:16.460 05:01:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:16.460 * Looking for test storage... 
00:04:16.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:16.460 05:01:35 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:16.460 05:01:35 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:16.460 05:01:35 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:16.460 05:01:35 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:16.460 05:01:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.460 05:01:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:16.460 05:01:35 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:16.460 05:01:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.460 05:01:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.460 05:01:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.460 05:01:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:16.460 05:01:35 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:16.460 05:01:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:16.460 05:01:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.460 05:01:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.460 05:01:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:04:16.460 05:01:35 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:04:16.460 05:01:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:16.460 05:01:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.460 05:01:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.460 05:01:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n2 00:04:16.460 05:01:35 -- common/autotest_common.sh@1647 -- # local device=nvme2n2 00:04:16.460 05:01:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:16.460 05:01:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.460 05:01:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.460 05:01:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n3 00:04:16.460 05:01:35 -- common/autotest_common.sh@1647 -- # local device=nvme2n3 00:04:16.460 05:01:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:16.460 05:01:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.460 05:01:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.460 05:01:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3c3n1 00:04:16.460 05:01:35 -- common/autotest_common.sh@1647 -- # local device=nvme3c3n1 00:04:16.460 05:01:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:16.460 05:01:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.460 05:01:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.460 05:01:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:04:16.460 05:01:35 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:04:16.460 05:01:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:16.460 05:01:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.460 05:01:35 -- setup/acl.sh@12 -- # devs=() 00:04:16.460 05:01:35 -- setup/acl.sh@12 -- # declare -a devs 
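The get_zoned_devs xtrace above walks every /sys/block/nvme* device and tests the kernel's queue/zoned attribute; in this run each check evaluates [[ none != none ]], i.e. every namespace is conventional and nothing gets quarantined. A standalone sketch of the same scan, assuming bash 4+ for the associative array (the variable name mirrors the trace, but this is an illustration, not the repo's exact function):

#!/usr/bin/env bash
# Collect zoned block devices so later steps can avoid treating them as
# ordinary namespaces. The kernel reports "none" for conventional devices
# and e.g. "host-managed" for ZNS namespaces.
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    [[ -e $nvme/queue/zoned ]] || continue
    if [[ $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[${nvme##*/}]=1
    fi
done
echo "zoned devices: ${!zoned_devs[*]}"

The per-device sysfs read is the whole trick: queue/zoned is exposed by the block layer for every request queue, so the same loop also catches zoned SATA/SAS drives, not just NVMe ZNS.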
00:04:16.460 05:01:35 -- setup/acl.sh@13 -- # drivers=() 00:04:16.460 05:01:35 -- setup/acl.sh@13 -- # declare -A drivers 00:04:16.460 05:01:35 -- setup/acl.sh@51 -- # setup reset 00:04:16.460 05:01:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.460 05:01:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.855 05:01:36 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:17.855 05:01:36 -- setup/acl.sh@16 -- # local dev driver 00:04:17.855 05:01:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:17.855 05:01:36 -- setup/acl.sh@15 -- # setup output status 00:04:17.855 05:01:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.855 05:01:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:18.113 Hugepages 00:04:18.113 node hugesize free / total 00:04:18.113 05:01:36 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:18.113 05:01:36 -- setup/acl.sh@19 -- # continue 00:04:18.113 05:01:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.113 00:04:18.114 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.114 05:01:36 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:18.114 05:01:36 -- setup/acl.sh@19 -- # continue 00:04:18.114 05:01:36 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.114 05:01:37 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:18.114 05:01:37 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:18.114 05:01:37 -- setup/acl.sh@20 -- # continue 00:04:18.114 05:01:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.114 05:01:37 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:18.114 05:01:37 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.114 05:01:37 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:18.114 05:01:37 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.114 05:01:37 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.114 05:01:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.373 05:01:37 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:18.373 05:01:37 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.373 05:01:37 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:18.373 05:01:37 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.373 05:01:37 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.373 05:01:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.373 05:01:37 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:04:18.373 05:01:37 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.373 05:01:37 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:18.373 05:01:37 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.373 05:01:37 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.373 05:01:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.373 05:01:37 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:04:18.373 05:01:37 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.373 05:01:37 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:18.373 05:01:37 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.373 05:01:37 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.373 05:01:37 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.373 05:01:37 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:18.373 05:01:37 -- setup/acl.sh@54 -- # run_test denied denied 00:04:18.373 05:01:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.373 
05:01:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.373 05:01:37 -- common/autotest_common.sh@10 -- # set +x 00:04:18.373 ************************************ 00:04:18.373 START TEST denied 00:04:18.373 ************************************ 00:04:18.373 05:01:37 -- common/autotest_common.sh@1104 -- # denied 00:04:18.373 05:01:37 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:18.373 05:01:37 -- setup/acl.sh@38 -- # setup output config 00:04:18.373 05:01:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.373 05:01:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:18.373 05:01:37 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:19.749 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:19.749 05:01:38 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:19.749 05:01:38 -- setup/acl.sh@28 -- # local dev driver 00:04:19.749 05:01:38 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:19.749 05:01:38 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:19.750 05:01:38 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:19.750 05:01:38 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:19.750 05:01:38 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:19.750 05:01:38 -- setup/acl.sh@41 -- # setup reset 00:04:19.750 05:01:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.750 05:01:38 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:26.314 00:04:26.314 real 0m7.497s 00:04:26.314 user 0m0.900s 00:04:26.314 sys 0m1.682s 00:04:26.314 05:01:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.314 05:01:44 -- common/autotest_common.sh@10 -- # set +x 00:04:26.314 ************************************ 00:04:26.314 END TEST denied 00:04:26.314 ************************************ 00:04:26.314 05:01:44 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:26.314 05:01:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.314 05:01:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.314 05:01:44 -- common/autotest_common.sh@10 -- # set +x 00:04:26.314 ************************************ 00:04:26.314 START TEST allowed 00:04:26.314 ************************************ 00:04:26.314 05:01:44 -- common/autotest_common.sh@1104 -- # allowed 00:04:26.314 05:01:44 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:26.314 05:01:44 -- setup/acl.sh@45 -- # setup output config 00:04:26.314 05:01:44 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:26.314 05:01:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.314 05:01:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.251 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.251 05:01:46 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:27.251 05:01:46 -- setup/acl.sh@28 -- # local dev driver 00:04:27.251 05:01:46 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:27.251 05:01:46 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:27.251 05:01:46 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:27.251 05:01:46 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:27.251 05:01:46 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:27.251 05:01:46 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:27.251 05:01:46 -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:08.0 ]] 00:04:27.251 05:01:46 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:04:27.251 05:01:46 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:27.251 05:01:46 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:27.251 05:01:46 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:27.251 05:01:46 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:04:27.251 05:01:46 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:04:27.251 05:01:46 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:27.251 05:01:46 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:27.251 05:01:46 -- setup/acl.sh@48 -- # setup reset 00:04:27.251 05:01:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.251 05:01:46 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.627 ************************************ 00:04:28.627 END TEST allowed 00:04:28.627 ************************************ 00:04:28.627 00:04:28.627 real 0m2.524s 00:04:28.627 user 0m1.039s 00:04:28.627 sys 0m1.498s 00:04:28.627 05:01:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.627 05:01:47 -- common/autotest_common.sh@10 -- # set +x 00:04:28.627 ************************************ 00:04:28.627 END TEST acl 00:04:28.627 ************************************ 00:04:28.627 00:04:28.627 real 0m12.161s 00:04:28.627 user 0m2.860s 00:04:28.627 sys 0m4.443s 00:04:28.627 05:01:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.627 05:01:47 -- common/autotest_common.sh@10 -- # set +x 00:04:28.627 05:01:47 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:28.627 05:01:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:28.627 05:01:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:28.627 05:01:47 -- common/autotest_common.sh@10 -- # set +x 00:04:28.627 ************************************ 00:04:28.627 START TEST hugepages 00:04:28.627 ************************************ 00:04:28.627 05:01:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:28.627 * Looking for test storage... 
00:04:28.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:28.627 05:01:47 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:28.627 05:01:47 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:28.627 05:01:47 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:28.627 05:01:47 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:28.627 05:01:47 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:28.627 05:01:47 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:28.627 05:01:47 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:28.627 05:01:47 -- setup/common.sh@18 -- # local node= 00:04:28.627 05:01:47 -- setup/common.sh@19 -- # local var val 00:04:28.627 05:01:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.627 05:01:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.627 05:01:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.627 05:01:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.627 05:01:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.627 05:01:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.627 05:01:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 5862896 kB' 'MemAvailable: 7402952 kB' 'Buffers: 2436 kB' 'Cached: 1753976 kB' 'SwapCached: 0 kB' 'Active: 444816 kB' 'Inactive: 1413916 kB' 'Active(anon): 112832 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413916 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 104028 kB' 'Mapped: 48584 kB' 'Shmem: 10512 kB' 'KReclaimable: 62636 kB' 'Slab: 137988 kB' 'SReclaimable: 62636 kB' 'SUnreclaim: 75352 kB' 'KernelStack: 6412 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412432 kB' 'Committed_AS: 328368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:28.627 05:01:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.627 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.627 05:01:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.627 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.627 05:01:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.627 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.627 05:01:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.627 05:01:47 -- 
setup/common.sh@32 -- # continue 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.627 05:01:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.627 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.627 05:01:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.627 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.627 05:01:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.627 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.627 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.628 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.628 05:01:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.629 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.629 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.629 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.629 05:01:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.629 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.629 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.629 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.629 05:01:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.629 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.629 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.629 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.629 05:01:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.629 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.629 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.629 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.629 05:01:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.629 05:01:47 -- setup/common.sh@32 -- # continue 00:04:28.629 05:01:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.888 05:01:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.888 05:01:47 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.888 05:01:47 -- setup/common.sh@33 -- # echo 2048 00:04:28.888 05:01:47 -- setup/common.sh@33 -- # return 0 00:04:28.888 05:01:47 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:28.888 05:01:47 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:28.888 05:01:47 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:28.888 05:01:47 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:28.888 05:01:47 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:28.888 05:01:47 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
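The long field-by-field scan above is get_meminfo walking /proc/meminfo until it reaches the requested key (Hugepagesize here, answered with 2048). A condensed sketch of what the trace shows; the per-node override and the missing-field fallthrough are inferred rather than exercised in this run:

    shopt -s extglob   # needed for the +([0-9]) pattern below
    # Print one /proc/meminfo field, optionally from a NUMA node's meminfo.
    get_meminfo() {
        local get=$1
        local node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node rows carry a "Node N" prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                    # e.g. 2048 for Hugepagesize
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1                           # assumed when the field is absent
    }

In this run the call is effectively `get_meminfo Hugepagesize`, which prints 2048; the same helper is what produces the later AnonHugePages and HugePages_Surp scans below.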
00:04:28.888 05:01:47 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:28.888 05:01:47 -- setup/hugepages.sh@207 -- # get_nodes 00:04:28.888 05:01:47 -- setup/hugepages.sh@27 -- # local node 00:04:28.888 05:01:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.888 05:01:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:28.888 05:01:47 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:28.888 05:01:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.888 05:01:47 -- setup/hugepages.sh@208 -- # clear_hp 00:04:28.888 05:01:47 -- setup/hugepages.sh@37 -- # local node hp 00:04:28.888 05:01:47 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:28.888 05:01:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:28.888 05:01:47 -- setup/hugepages.sh@41 -- # echo 0 00:04:28.888 05:01:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:28.888 05:01:47 -- setup/hugepages.sh@41 -- # echo 0 00:04:28.888 05:01:47 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:28.888 05:01:47 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:28.888 05:01:47 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:28.888 05:01:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:28.888 05:01:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:28.888 05:01:47 -- common/autotest_common.sh@10 -- # set +x 00:04:28.888 ************************************ 00:04:28.888 START TEST default_setup 00:04:28.888 ************************************ 00:04:28.888 05:01:47 -- common/autotest_common.sh@1104 -- # default_setup 00:04:28.888 05:01:47 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:28.888 05:01:47 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:28.888 05:01:47 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:28.888 05:01:47 -- setup/hugepages.sh@51 -- # shift 00:04:28.888 05:01:47 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:28.888 05:01:47 -- setup/hugepages.sh@52 -- # local node_ids 00:04:28.888 05:01:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.888 05:01:47 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:28.888 05:01:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:28.888 05:01:47 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:28.888 05:01:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.888 05:01:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:28.888 05:01:47 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:28.888 05:01:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.888 05:01:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.888 05:01:47 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:28.888 05:01:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:28.888 05:01:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:28.888 05:01:47 -- setup/hugepages.sh@73 -- # return 0 00:04:28.888 05:01:47 -- setup/hugepages.sh@137 -- # setup output 00:04:28.888 05:01:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.888 05:01:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.084 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.084 
0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.084 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.084 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.346 05:01:49 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:30.346 05:01:49 -- setup/hugepages.sh@89 -- # local node 00:04:30.346 05:01:49 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.346 05:01:49 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.346 05:01:49 -- setup/hugepages.sh@92 -- # local surp 00:04:30.346 05:01:49 -- setup/hugepages.sh@93 -- # local resv 00:04:30.346 05:01:49 -- setup/hugepages.sh@94 -- # local anon 00:04:30.346 05:01:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.346 05:01:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.346 05:01:49 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.346 05:01:49 -- setup/common.sh@18 -- # local node= 00:04:30.346 05:01:49 -- setup/common.sh@19 -- # local var val 00:04:30.346 05:01:49 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.346 05:01:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.346 05:01:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.346 05:01:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.346 05:01:49 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.346 05:01:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7907820 kB' 'MemAvailable: 9447616 kB' 'Buffers: 2436 kB' 'Cached: 1753956 kB' 'SwapCached: 0 kB' 'Active: 460748 kB' 'Inactive: 1413932 kB' 'Active(anon): 128764 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119876 kB' 'Mapped: 48740 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137336 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75248 kB' 'KernelStack: 6368 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 
05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.346 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.346 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 
-- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.347 05:01:49 -- setup/common.sh@33 -- # echo 0 00:04:30.347 
05:01:49 -- setup/common.sh@33 -- # return 0 00:04:30.347 05:01:49 -- setup/hugepages.sh@97 -- # anon=0 00:04:30.347 05:01:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.347 05:01:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.347 05:01:49 -- setup/common.sh@18 -- # local node= 00:04:30.347 05:01:49 -- setup/common.sh@19 -- # local var val 00:04:30.347 05:01:49 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.347 05:01:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.347 05:01:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.347 05:01:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.347 05:01:49 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.347 05:01:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7907820 kB' 'MemAvailable: 9447616 kB' 'Buffers: 2436 kB' 'Cached: 1753956 kB' 'SwapCached: 0 kB' 'Active: 460420 kB' 'Inactive: 1413932 kB' 'Active(anon): 128436 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119500 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137328 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75240 kB' 'KernelStack: 6336 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.347 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.347 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # continue 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.348 05:01:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.348 05:01:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.348 05:01:49 -- setup/common.sh@32 
-- # continue
[ xtrace elided: the remaining /proc/meminfo keys (FileHugePages through HugePages_Rsvd) each fail the HugePages_Surp match at setup/common.sh@32 and take the '# continue' branch ]
00:04:30.349 05:01:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.349 05:01:49 -- setup/common.sh@33 -- # echo 0
00:04:30.349 05:01:49 -- setup/common.sh@33 -- # return 0
00:04:30.349 05:01:49 -- setup/hugepages.sh@99 -- # surp=0
00:04:30.349 05:01:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:30.349 05:01:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:30.349 05:01:49 -- setup/common.sh@18 -- # local node=
00:04:30.349 05:01:49 -- setup/common.sh@19 -- # local var val
00:04:30.349 05:01:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.349 05:01:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.349 05:01:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.349 05:01:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.349 05:01:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.349 05:01:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.349 05:01:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7907820 kB' 'MemAvailable: 9447616 kB' 'Buffers: 2436 kB' 'Cached: 1753956 kB' 'SwapCached: 0 kB' 'Active: 460408 kB' 'Inactive: 1413932 kB' 'Active(anon): 128424 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119524 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137324 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75236 kB' 'KernelStack: 6352 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
00:04:30.349 05:01:49 -- setup/common.sh@31 -- # IFS=': '
00:04:30.349 05:01:49 -- setup/common.sh@31 -- # read -r var val _
[ xtrace elided: every key of the snapshot above, from MemTotal down, fails the HugePages_Rsvd match at setup/common.sh@32 and takes the '# continue' branch ]
00:04:30.350 05:01:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.350 05:01:49 -- setup/common.sh@33 -- # echo 0
00:04:30.350 05:01:49 -- setup/common.sh@33 -- # return 0
00:04:30.350 05:01:49 -- setup/hugepages.sh@100 -- # resv=0
00:04:30.350 nr_hugepages=1024
00:04:30.350 05:01:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:30.350 resv_hugepages=0
00:04:30.350 05:01:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:30.350 surplus_hugepages=0
00:04:30.350 05:01:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:30.350 anon_hugepages=0
00:04:30.350 05:01:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:30.350 05:01:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:30.350 05:01:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:30.350 05:01:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:30.350 05:01:49 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:30.350 05:01:49 -- setup/common.sh@18 -- # local node=
00:04:30.350 05:01:49 -- setup/common.sh@19 -- # local var val
00:04:30.350 05:01:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.350 05:01:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.350 05:01:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.350 05:01:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.350 05:01:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.350 05:01:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.350 05:01:49 -- setup/common.sh@31 -- # IFS=': '
00:04:30.350 05:01:49 -- setup/common.sh@31 -- # read -r var val _
00:04:30.350 05:01:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7907820 kB' 'MemAvailable: 9447616 kB' 'Buffers: 2436 kB' 'Cached: 1753956 kB' 'SwapCached: 0 kB' 'Active: 460300 kB' 'Inactive: 1413932 kB' 'Active(anon): 128316 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119420 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137324 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75236 kB' 'KernelStack: 6352 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
[ xtrace elided: every key above HugePages_Total fails the match at setup/common.sh@32 and takes the '# continue' branch ]
00:04:30.352 05:01:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.352 05:01:49 -- setup/common.sh@33 -- # echo 1024
00:04:30.352 05:01:49 -- setup/common.sh@33 -- # return 0
00:04:30.352 05:01:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:30.352 05:01:49 -- setup/hugepages.sh@112 -- # get_nodes
00:04:30.352 05:01:49 -- setup/hugepages.sh@27 -- # local node
00:04:30.352 05:01:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.352 05:01:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:30.352 05:01:49 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:30.352 05:01:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:30.352 05:01:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:30.352 05:01:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:30.352 05:01:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:30.352 05:01:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.352 05:01:49 -- setup/common.sh@18 -- # local node=0
00:04:30.352 05:01:49 -- setup/common.sh@19 -- # local var val
00:04:30.352 05:01:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.352 05:01:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.352 05:01:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:30.352 05:01:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:30.352 05:01:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.352 05:01:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.352 05:01:49 -- setup/common.sh@31 -- # IFS=': '
00:04:30.352 05:01:49 -- setup/common.sh@31 -- # read -r var val _
00:04:30.352 05:01:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7907820 kB' 'MemUsed: 4334144 kB' 'SwapCached: 0 kB' 'Active: 460208 kB' 'Inactive: 1413932 kB' 'Active(anon): 128224 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1756392 kB' 'Mapped: 48612 kB' 'AnonPages: 119640 kB' 'Shmem: 10472 kB' 'KernelStack: 6384 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62088 kB' 'Slab: 137324 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[ xtrace elided: per-key scan of the node0 snapshot fails the HugePages_Surp match at setup/common.sh@32 until the final key, each taking the '# continue' branch ]
00:04:30.353 05:01:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.353 05:01:49 -- setup/common.sh@33 -- # echo 0
00:04:30.353 05:01:49 -- setup/common.sh@33 -- # return 0
00:04:30.353 05:01:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:30.353 05:01:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.353 05:01:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.353 05:01:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.353 05:01:49 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:30.353 node0=1024 expecting 1024
00:04:30.353 05:01:49 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:30.353
00:04:30.353 real 0m1.576s
00:04:30.353 user 0m0.606s
00:04:30.353 sys 0m0.939s
00:04:30.353 05:01:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:30.353 05:01:49 -- common/autotest_common.sh@10 -- # set +x
00:04:30.353 ************************************
00:04:30.353 END TEST default_setup
00:04:30.353 ************************************
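For orientation before the next test's trace: every get_meminfo call above follows the same shape, and the elided '# continue' runs are just its per-key loop skipping non-matching /proc/meminfo keys. Below is a minimal stand-alone bash sketch of that pattern; the name get_meminfo_sketch and the sed-based "Node <id>" stripping are illustrative assumptions, not the actual setup/common.sh implementation (which buffers the file with mapfile, as the trace shows):

    # Illustrative sketch only -- not the real setup/common.sh helper.
    # It reproduces the observable behaviour of the traces above: pick one
    # key out of /proc/meminfo (or a node's sysfs meminfo) and echo its value.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        # With a node id, per-node counters come from sysfs instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix each line with "Node <id> "; strip that, then
        # split on ':' and spaces. The skipped iterations here are the long
        # '# continue' runs elided from the trace.
        sed -E 's/^Node [0-9]+ +//' "$mem_f" | while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"        # e.g. "1024" for HugePages_Total
            break
        done
    }
    # Usage: get_meminfo_sketch HugePages_Total
    #        get_meminfo_sketch HugePages_Surp 0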
00:04:30.353 05:01:49 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:30.353 05:01:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:30.353 05:01:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:30.353 05:01:49 -- common/autotest_common.sh@10 -- # set +x
00:04:30.353 ************************************
00:04:30.353 START TEST per_node_1G_alloc
00:04:30.353 ************************************
00:04:30.353 05:01:49 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:04:30.353 05:01:49 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:30.353 05:01:49 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:30.353 05:01:49 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:30.353 05:01:49 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:30.353 05:01:49 -- setup/hugepages.sh@51 -- # shift
00:04:30.353 05:01:49 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:30.353 05:01:49 -- setup/hugepages.sh@52 -- # local node_ids
00:04:30.353 05:01:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:30.353 05:01:49 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:30.353 05:01:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:30.353 05:01:49 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:30.353 05:01:49 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:30.353 05:01:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:30.353 05:01:49 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:30.353 05:01:49 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:30.353 05:01:49 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:30.353 05:01:49 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:30.353 05:01:49 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:30.353 05:01:49 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:30.353 05:01:49 -- setup/hugepages.sh@73 -- # return 0
00:04:30.353 05:01:49 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:30.353 05:01:49 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:30.353 05:01:49 -- setup/hugepages.sh@146 -- # setup output
00:04:30.353 05:01:49 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:30.353 05:01:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
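Where nr_hugepages=512 comes from, before the verification trace below: get_test_nr_hugepages was invoked with size 1048576 (kB, i.e. 1 GiB) for node 0, and the snapshots in this log report 'Hugepagesize: 2048 kB'. A quick back-of-the-envelope check, consistent with the trace (the variable names here are illustrative, not from the script):

    # 1 GiB requested on node 0, with 2 MiB default huge pages:
    size_kb=1048576        # argument to get_test_nr_hugepages
    hugepagesize_kb=2048   # 'Hugepagesize: 2048 kB' from the snapshots
    echo $(( size_kb / hugepagesize_kb ))   # 512 -> matches nr_hugepages=512
    # ...which the test then passes as NRHUGE=512 HUGENODE=0 to setup.sh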
00:04:31.185 05:01:50 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:31.185 05:01:50 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:31.185 05:01:50 -- setup/hugepages.sh@89 -- # local node
00:04:31.185 05:01:50 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:31.185 05:01:50 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:31.185 05:01:50 -- setup/hugepages.sh@92 -- # local surp
00:04:31.185 05:01:50 -- setup/hugepages.sh@93 -- # local resv
00:04:31.185 05:01:50 -- setup/hugepages.sh@94 -- # local anon
00:04:31.185 05:01:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:31.185 05:01:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:31.185 05:01:50 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:31.185 05:01:50 -- setup/common.sh@18 -- # local node=
00:04:31.185 05:01:50 -- setup/common.sh@19 -- # local var val
00:04:31.185 05:01:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.185 05:01:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.185 05:01:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.185 05:01:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.185 05:01:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.185 05:01:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.185 05:01:50 -- setup/common.sh@31 -- # IFS=': '
00:04:31.185 05:01:50 -- setup/common.sh@31 -- # read -r var val _
00:04:31.185 05:01:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8963404 kB' 'MemAvailable: 10503204 kB' 'Buffers: 2436 kB' 'Cached: 1753956 kB' 'SwapCached: 0 kB' 'Active: 460992 kB' 'Inactive: 1413936 kB' 'Active(anon): 129008 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 120156 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137368 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75280 kB' 'KernelStack: 6444 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
[ xtrace elided: every key above AnonHugePages fails the match at setup/common.sh@32 and takes the '# continue' branch ]
00:04:31.186 05:01:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:31.186 05:01:50 -- setup/common.sh@33 -- # echo 0
00:04:31.186 05:01:50 -- setup/common.sh@33 -- # return 0
00:04:31.186 05:01:50 -- setup/hugepages.sh@97 -- # anon=0
00:04:31.186 05:01:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:31.186 05:01:50 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:31.186 05:01:50 -- setup/common.sh@18 -- # local node=
00:04:31.186 05:01:50 -- setup/common.sh@19 -- # local var val
00:04:31.186 05:01:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.186 05:01:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.186 05:01:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.186 05:01:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.186 05:01:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.186 05:01:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.186 05:01:50 -- setup/common.sh@31 -- # IFS=': '
00:04:31.186 05:01:50 -- setup/common.sh@31 -- # read -r var val _
00:04:31.186 05:01:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8963540 kB' 'MemAvailable: 10503340 kB' 'Buffers: 2436 kB' 'Cached: 1753956 kB' 'SwapCached: 0 kB' 'Active: 460964 kB' 'Inactive: 1413936 kB' 'Active(anon): 128980 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119848 kB' 'Mapped: 48724 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137368 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75280 kB' 'KernelStack: 6348 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
'Active(anon): 128980 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119848 kB' 'Mapped: 48724 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137368 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75280 kB' 'KernelStack: 6348 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # continue 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # continue 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # continue 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # continue 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # continue 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # continue 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # continue 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # continue 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.186 05:01:50 -- setup/common.sh@32 -- # continue 00:04:31.186 05:01:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.186 05:01:50 -- 
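
The xtrace above is SPDK's get_meminfo helper (setup/common.sh) pulling a single field out of /proc/meminfo. Below is a minimal sketch of its shape as reconstructed from the trace; it is not the verbatim SPDK source. The loop framing, the direct file redirect (the real script appears to feed mapfile through a printf helper at common.sh@16), and the return-on-miss are assumptions, and extglob must be enabled for the +([0-9]) pattern:

    shopt -s extglob                           # required for +([0-9]) below
    get_meminfo() {                            # e.g. get_meminfo HugePages_Surp [node]
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # with a node id, read that node's own counters from sysfs instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix of sysfs lines
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # literal match; xtrace prints it escaped
            echo "$val"                        # kB figure, or a bare count for HugePages_*
            return 0
        done
        return 1
    }

On the snapshot above this yields 0 for HugePages_Surp, which is exactly the echo 0 / return 0 pair the trace shows.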
00:04:31.188 05:01:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:31.188 05:01:50 -- setup/common.sh@33 -- # echo 0
00:04:31.188 05:01:50 -- setup/common.sh@33 -- # return 0
00:04:31.188 05:01:50 -- setup/hugepages.sh@99 -- # surp=0
00:04:31.188 05:01:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:31.188 05:01:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:31.188 05:01:50 -- setup/common.sh@18 -- # local node=
00:04:31.188 05:01:50 -- setup/common.sh@19 -- # local var val
00:04:31.188 05:01:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.188 05:01:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.188 05:01:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.188 05:01:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.188 05:01:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.188 05:01:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.188 05:01:50 -- setup/common.sh@31 -- # IFS=': '
00:04:31.188 05:01:50 -- setup/common.sh@31 -- # read -r var val _
00:04:31.188 05:01:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8963540 kB' 'MemAvailable: 10503340 kB' 'Buffers: 2436 kB' 'Cached: 1753956 kB' 'SwapCached: 0 kB' 'Active: 460840 kB' 'Inactive: 1413936 kB' 'Active(anon): 128856 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 120012 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137400 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75312 kB' 'KernelStack: 6416 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 352456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
00:04:31.188 [... xtrace condensed: the field-by-field compare/continue scan repeats until HugePages_Rsvd is reached ...]
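
A quick sanity check on the /proc/meminfo snapshots printed above: the hugetlb pool is fully accounted for, since all 512 pre-allocated 2048 kB pages are still free and none are reserved or surplus. In kB:

    # Hugetlb should equal HugePages_Total * Hugepagesize (values from the snapshot)
    echo $(( 512 * 2048 ))    # -> 1048576, matching 'Hugetlb: 1048576 kB'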
00:04:31.189 05:01:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:31.189 05:01:50 -- setup/common.sh@33 -- # echo 0
00:04:31.189 05:01:50 -- setup/common.sh@33 -- # return 0
00:04:31.189 05:01:50 -- setup/hugepages.sh@100 -- # resv=0
00:04:31.189 05:01:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:31.189 nr_hugepages=512
00:04:31.189 resv_hugepages=0
00:04:31.189 05:01:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:31.189 surplus_hugepages=0
00:04:31.189 05:01:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:31.189 anon_hugepages=0
00:04:31.189 05:01:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:31.189 05:01:50 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:31.189 05:01:50 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:31.189 05:01:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:31.189 05:01:50 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:31.189 05:01:50 -- setup/common.sh@18 -- # local node=
00:04:31.189 05:01:50 -- setup/common.sh@19 -- # local var val
00:04:31.189 05:01:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.189 05:01:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.189 05:01:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.189 05:01:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.189 05:01:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.189 05:01:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.189 05:01:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8964312 kB' 'MemAvailable: 10504112 kB' 'Buffers: 2436 kB' 'Cached: 1753956 kB' 'SwapCached: 0 kB' 'Active: 461128 kB' 'Inactive: 1413936 kB' 'Active(anon): 129144 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 120328 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137384 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75296 kB' 'KernelStack: 6368 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
00:04:31.190 05:01:50 -- setup/common.sh@31 -- # IFS=': '
00:04:31.190 05:01:50 -- setup/common.sh@31 -- # read -r var val _
00:04:31.190 [... xtrace condensed: the field-by-field compare/continue scan repeats until HugePages_Total is reached ...]
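
The three reads above (surp, resv, then total) feed the accounting assertions at hugepages.sh@107 and @109: the kernel's HugePages_Total must equal the test's configured nr_hugepages plus any surplus and reserved pages. A sketch of that invariant, reusing the get_meminfo sketch from earlier (variable names follow the trace; the error message is illustrative):

    nr_hugepages=512
    anon=$(get_meminfo AnonHugePages)     # 0 kB here: no transparent hugepages in use
    surp=$(get_meminfo HugePages_Surp)    # 0: nothing allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)    # 0: no pages promised to mappings yet
    total=$(get_meminfo HugePages_Total)  # 512
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2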
00:04:31.191 05:01:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:31.191 05:01:50 -- setup/common.sh@33 -- # echo 512
00:04:31.191 05:01:50 -- setup/common.sh@33 -- # return 0
00:04:31.191 05:01:50 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:31.191 05:01:50 -- setup/hugepages.sh@112 -- # get_nodes
00:04:31.191 05:01:50 -- setup/hugepages.sh@27 -- # local node
00:04:31.191 05:01:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:31.191 05:01:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:31.191 05:01:50 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:31.191 05:01:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:31.191 05:01:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:31.191 05:01:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:31.191 05:01:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:31.191 05:01:50 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:31.191 05:01:50 -- setup/common.sh@18 -- # local node=0
00:04:31.191 05:01:50 -- setup/common.sh@19 -- # local var val
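
With the global pool verified, the test turns to per-node accounting: get_nodes discovers node ids from sysfs and records the expected page count for each (a single node on this VM), after which get_meminfo is called with a node argument so the read comes from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo. Roughly, following the trace (extglob assumed on, as in the script):

    shopt -s extglob                      # needed for the +([0-9]) glob below
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512     # expected hugepages on this node
    done
    echo "${!nodes_sys[@]}"               # -> 0 (single NUMA node)
    surp0=$(get_meminfo HugePages_Surp 0) # node-local surplus, read from node0/meminfo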
00:04:31.191 05:01:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.191 05:01:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.191 05:01:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:31.191 05:01:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:31.191 05:01:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.191 05:01:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.191 05:01:50 -- setup/common.sh@31 -- # IFS=': '
00:04:31.191 05:01:50 -- setup/common.sh@31 -- # read -r var val _
00:04:31.191 05:01:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8964336 kB' 'MemUsed: 3277628 kB' 'SwapCached: 0 kB' 'Active: 460324 kB' 'Inactive: 1413936 kB' 'Active(anon): 128340 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1756392 kB' 'Mapped: 48612 kB' 'AnonPages: 119760 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62088 kB' 'Slab: 137372 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:31.191 [... xtrace condensed: the field-by-field compare/continue scan over node0's meminfo repeats until HugePages_Surp is reached ...]
setup/common.sh@31 -- # IFS=': ' 00:04:31.192 05:01:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.192 05:01:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.192 05:01:50 -- setup/common.sh@32 -- # continue 00:04:31.192 05:01:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.192 05:01:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.192 05:01:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.192 05:01:50 -- setup/common.sh@32 -- # continue 00:04:31.192 05:01:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.192 05:01:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.192 05:01:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.192 05:01:50 -- setup/common.sh@32 -- # continue 00:04:31.192 05:01:50 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.192 05:01:50 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.192 05:01:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.192 05:01:50 -- setup/common.sh@33 -- # echo 0 00:04:31.192 05:01:50 -- setup/common.sh@33 -- # return 0 00:04:31.192 05:01:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.192 05:01:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.192 05:01:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.192 05:01:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.192 node0=512 expecting 512 00:04:31.192 05:01:50 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:31.192 05:01:50 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:31.192 00:04:31.192 real 0m0.806s 00:04:31.192 user 0m0.346s 00:04:31.192 sys 0m0.517s 00:04:31.192 05:01:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.192 05:01:50 -- common/autotest_common.sh@10 -- # set +x 00:04:31.192 ************************************ 00:04:31.192 END TEST per_node_1G_alloc 00:04:31.192 ************************************ 00:04:31.192 05:01:50 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:31.192 05:01:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.192 05:01:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.192 05:01:50 -- common/autotest_common.sh@10 -- # set +x 00:04:31.192 ************************************ 00:04:31.192 START TEST even_2G_alloc 00:04:31.192 ************************************ 00:04:31.192 05:01:50 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:31.192 05:01:50 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:31.192 05:01:50 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:31.192 05:01:50 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:31.192 05:01:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:31.192 05:01:50 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:31.192 05:01:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:31.192 05:01:50 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:31.192 05:01:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.192 05:01:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:31.192 05:01:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:31.192 05:01:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.192 05:01:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.192 05:01:50 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:31.192 05:01:50 -- 
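What the trace above does, mechanically: get_meminfo reads /proc/meminfo, or the per-node copy under /sys/devices/system/node/node<N>/meminfo when a node is requested, strips the "Node N" prefix, then walks the fields until the requested key matches and echoes its value. A minimal bash sketch of that flow, reconstructed from the xtrace rather than copied from SPDK's setup/common.sh, so treat names and details as approximate:

  #!/usr/bin/env bash
  shopt -s extglob                      # for the +([0-9]) pattern below

  get_meminfo() {                       # usage: get_meminfo <field> [<numa-node>]
      local get=$1 node=${2:-}
      local var val _ mem_f mem
      mem_f=/proc/meminfo
      # prefer the per-node counters when the per-node file exists
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "${val:-0}"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      echo 0                            # unknown field reads as zero
  }

  get_meminfo HugePages_Surp 0          # node0 surplus pages, as queried above

As a sanity check on the node0 dump above: MemUsed = MemTotal - MemFree (12241964 - 8964336 = 3277628 kB) and Slab = SReclaimable + SUnreclaim (62088 + 75284 = 137372 kB), so the snapshot is internally consistent.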
00:04:31.192 05:01:50 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:31.192 05:01:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:31.192 05:01:50 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:31.192 05:01:50 -- common/autotest_common.sh@10 -- # set +x
00:04:31.192 ************************************
00:04:31.192 START TEST even_2G_alloc
00:04:31.192 ************************************
00:04:31.192 05:01:50 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:04:31.192 05:01:50 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:31.192 05:01:50 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:31.192 05:01:50 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:31.192 05:01:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:31.192 05:01:50 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:31.192 05:01:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:31.192 05:01:50 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:31.192 05:01:50 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:31.192 05:01:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:31.192 05:01:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:31.192 05:01:50 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:31.192 05:01:50 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:31.192 05:01:50 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:31.192 05:01:50 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:31.192 05:01:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:31.192 05:01:50 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:31.192 05:01:50 -- setup/hugepages.sh@83 -- # : 0
00:04:31.192 05:01:50 -- setup/hugepages.sh@84 -- # : 0
00:04:31.192 05:01:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:31.192 05:01:50 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:31.192 05:01:50 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:31.192 05:01:50 -- setup/hugepages.sh@153 -- # setup output
00:04:31.192 05:01:50 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:31.192 05:01:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:32.023 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:32.023 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:32.023 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:32.023 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:32.023 05:01:50 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:32.023 05:01:50 -- setup/hugepages.sh@89 -- # local node
00:04:32.023 05:01:50 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:32.023 05:01:50 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:32.023 05:01:50 -- setup/hugepages.sh@92 -- # local surp
00:04:32.023 05:01:50 -- setup/hugepages.sh@93 -- # local resv
00:04:32.023 05:01:50 -- setup/hugepages.sh@94 -- # local anon
00:04:32.023 05:01:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:32.023 05:01:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:32.023 05:01:50 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:32.023 05:01:50 -- setup/common.sh@18 -- # local node=
00:04:32.023 05:01:50 -- setup/common.sh@19 -- # local var val
00:04:32.023 05:01:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.023 05:01:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.023 05:01:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.023 05:01:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.023 05:01:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.024 05:01:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.024 05:01:50 -- setup/common.sh@31 -- # IFS=': '
00:04:32.024 05:01:50 -- setup/common.sh@31 -- # read -r var val _
00:04:32.024 05:01:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7927180 kB' 'MemAvailable: 9466980 kB' 'Buffers: 2436 kB' 'Cached: 1753956 kB' 'SwapCached: 0 kB' 'Active: 461312 kB' 'Inactive: 1413936 kB' 'Active(anon): 129328 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 120212 kB' 'Mapped: 48812 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137368 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75280 kB' 'KernelStack: 6412 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
[... xtrace repeats truncated: the same setup/common.sh@32 field-by-field scan, this time against AnonHugePages ...]
00:04:32.025 05:01:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:32.025 05:01:50 -- setup/common.sh@33 -- # echo 0
00:04:32.025 05:01:50 -- setup/common.sh@33 -- # return 0
00:04:32.025 05:01:50 -- setup/hugepages.sh@97 -- # anon=0
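The three queries around this point are the data-gathering half of verify_nr_hugepages: anonymous THP usage (skipped when transparent_hugepage is set to [never]), surplus pages, and reserved pages. A hedged sketch of that sequence using the helper sketched earlier; the variable names are inferred from the trace, not taken from the SPDK source:

  anon=0 surp=0 resv=0
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  [[ $thp != *"[never]"* ]] && anon=$(get_meminfo AnonHugePages)
  surp=$(get_meminfo HugePages_Surp)   # pages allocated beyond the persistent pool
  resv=$(get_meminfo HugePages_Rsvd)   # pages promised to mappings but not yet faulted

In this run all three come back 0, as the echoed summary further down confirms.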
00:04:32.025 05:01:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:32.025 05:01:50 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:32.025 05:01:50 -- setup/common.sh@18 -- # local node=
00:04:32.025 05:01:50 -- setup/common.sh@19 -- # local var val
00:04:32.025 05:01:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.025 05:01:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.025 05:01:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.025 05:01:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.025 05:01:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.025 05:01:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.025 05:01:50 -- setup/common.sh@31 -- # IFS=': '
00:04:32.025 05:01:50 -- setup/common.sh@31 -- # read -r var val _
00:04:32.025 05:01:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7927180 kB' 'MemAvailable: 9466980 kB' 'Buffers: 2436 kB' 'Cached: 1753956 kB' 'SwapCached: 0 kB' 'Active: 460764 kB' 'Inactive: 1413936 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119944 kB' 'Mapped: 48744 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137384 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75296 kB' 'KernelStack: 6400 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
[... xtrace repeats truncated: field-by-field scan against HugePages_Surp ...]
00:04:32.026 05:01:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.026 05:01:50 -- setup/common.sh@33 -- # echo 0
00:04:32.026 05:01:50 -- setup/common.sh@33 -- # return 0
00:04:32.026 05:01:50 -- setup/hugepages.sh@99 -- # surp=0
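These counters are plain procfs/sysfs state, so the same numbers can be checked by hand outside the harness (standard kernel interfaces, not SPDK-specific):

  grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
  grep -i huge /sys/devices/system/node/node0/meminfo   # per-node view of the same pool
  cat /proc/sys/vm/nr_hugepages                         # size of the persistent pool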
00:04:32.026 05:01:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:32.026 05:01:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:32.026 05:01:50 -- setup/common.sh@18 -- # local node=
00:04:32.026 05:01:50 -- setup/common.sh@19 -- # local var val
00:04:32.026 05:01:50 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.026 05:01:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.026 05:01:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.026 05:01:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.026 05:01:50 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.026 05:01:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.026 05:01:50 -- setup/common.sh@31 -- # IFS=': '
00:04:32.026 05:01:50 -- setup/common.sh@31 -- # read -r var val _
00:04:32.026 05:01:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7926928 kB' 'MemAvailable: 9466728 kB' 'Buffers: 2436 kB' 'Cached: 1753956 kB' 'SwapCached: 0 kB' 'Active: 460492 kB' 'Inactive: 1413936 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119656 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137376 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75288 kB' 'KernelStack: 6368 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
[... xtrace repeats truncated: field-by-field scan against HugePages_Rsvd; the wall clock rolls over from 05:01:50 to 05:01:51 partway through ...]
00:04:32.028 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:32.028 05:01:51 -- setup/common.sh@33 -- # echo 0
00:04:32.028 05:01:51 -- setup/common.sh@33 -- # return 0
00:04:32.028 05:01:51 -- setup/hugepages.sh@100 -- # resv=0
00:04:32.028 nr_hugepages=1024
00:04:32.028 05:01:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:32.028 resv_hugepages=0
00:04:32.028 05:01:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:32.028 surplus_hugepages=0
00:04:32.028 05:01:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:32.028 anon_hugepages=0
00:04:32.028 05:01:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:32.028 05:01:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:32.028 05:01:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
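The two arithmetic guards at hugepages.sh@107 and @109 are the actual verification: the expected 1024 pages must account for the persistent pool plus surplus and reserved pages, and with surp and resv both 0 the second check pins the pool to exactly the requested size. Standalone, with the values observed in this run:

  nr_hugepages=1024 surp=0 resv=0                       # as echoed above
  (( 1024 == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'
  (( 1024 == nr_hugepages )) || echo 'pool size drifted from the requested count'

The trace then appears to re-read HugePages_Total from /proc/meminfo, so the comparison is made against the kernel's own count rather than the harness's bookkeeping.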
00:04:32.028 05:01:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:32.028 05:01:51 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:32.028 05:01:51 -- setup/common.sh@18 -- # local node=
00:04:32.028 05:01:51 -- setup/common.sh@19 -- # local var val
00:04:32.028 05:01:51 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.028 05:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.028 05:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.028 05:01:51 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.028 05:01:51 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.028 05:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.028 05:01:51 -- setup/common.sh@31 -- # IFS=': '
00:04:32.028 05:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7927180 kB' 'MemAvailable: 9466980 kB' 'Buffers: 2436 kB' 'Cached: 1753956 kB' 'SwapCached: 0 kB' 'Active: 460492 kB' 'Inactive: 1413936 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119656 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137376 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75288 kB' 'KernelStack: 6368 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
00:04:32.028 05:01:51 -- setup/common.sh@31 -- # read -r var val _
[... xtrace repeats truncated: field-by-field scan against HugePages_Total, MemTotal through Dirty; the raw trace resumes below ...]
00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 
05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.029 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.029 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.029 05:01:51 -- setup/common.sh@33 -- # echo 1024 00:04:32.029 05:01:51 -- setup/common.sh@33 -- # return 0 00:04:32.029 05:01:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.029 05:01:51 -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.029 05:01:51 -- setup/hugepages.sh@27 -- # local node 00:04:32.029 05:01:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.029 05:01:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:32.029 05:01:51 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:32.029 05:01:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.029 05:01:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.029 05:01:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.029 05:01:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.029 05:01:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.029 05:01:51 -- setup/common.sh@18 -- # local node=0 00:04:32.029 05:01:51 -- setup/common.sh@19 -- # local var val 00:04:32.029 05:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.029 05:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.029 05:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.030 05:01:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.030 05:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.030 05:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7927180 kB' 'MemUsed: 4314784 kB' 'SwapCached: 0 kB' 'Active: 460428 kB' 'Inactive: 1413936 kB' 'Active(anon): 128444 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1756392 kB' 'Mapped: 48612 kB' 'AnonPages: 119548 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62088 kB' 'Slab: 137376 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75288 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 
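The get_meminfo helper that this trace keeps re-entering can be reconstructed from the xtrace itself. A minimal sketch, assuming bash 4+ with extglob; it mirrors the commands visible above (mapfile, the "Node N " prefix strip, the IFS=': ' scan with continue on every non-matching key) and is not the verbatim setup/common.sh source:

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo mem var val _
    # With a node argument, read the per-node file instead; its lines are
    # prefixed "Node 0 HugePages_Surp: 0", hence the strip below.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    shopt -s extglob                     # the +([0-9]) pattern needs extglob
    mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node N " prefix
    # Linear scan: each non-matching key is one "continue" in the trace;
    # on a match, print the value column and return 0.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}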
00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.030 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.030 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.031 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.031 05:01:51 -- setup/common.sh@33 -- # echo 0 00:04:32.031 05:01:51 -- setup/common.sh@33 -- # return 0 00:04:32.031 05:01:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.031 05:01:51 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.031 05:01:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.031 05:01:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.031 05:01:51 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:32.031 node0=1024 expecting 1024 00:04:32.031 05:01:51 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:32.031 00:04:32.031 real 0m0.811s 00:04:32.031 user 0m0.353s 00:04:32.031 sys 0m0.521s 00:04:32.031 05:01:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.031 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:04:32.031 ************************************ 00:04:32.031 END TEST even_2G_alloc 00:04:32.031 ************************************ 00:04:32.031 05:01:51 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:32.031 05:01:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.031 05:01:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.031 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:04:32.031 ************************************ 00:04:32.031 START TEST odd_alloc 00:04:32.031 ************************************ 00:04:32.031 05:01:51 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:32.031 05:01:51 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:32.031 05:01:51 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:32.031 05:01:51 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:32.031 05:01:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.031 05:01:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:32.031 05:01:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:32.031 05:01:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:32.031 05:01:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.031 05:01:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:32.031 05:01:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:32.290 05:01:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.290 05:01:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.290 05:01:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:32.290 05:01:51 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:32.290 05:01:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.290 05:01:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:32.290 05:01:51 -- setup/hugepages.sh@83 -- # : 0 00:04:32.290 05:01:51 -- setup/hugepages.sh@84 -- # : 0 00:04:32.290 05:01:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.290 05:01:51 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:32.290 05:01:51 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:32.290 05:01:51 -- setup/hugepages.sh@160 -- # setup output 00:04:32.290 05:01:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.290 05:01:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:32.861 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.861 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.861 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.861 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.861 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:32.861 05:01:51 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:32.861 05:01:51 
-- setup/hugepages.sh@89 -- # local node 00:04:32.861 05:01:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:32.861 05:01:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:32.861 05:01:51 -- setup/hugepages.sh@92 -- # local surp 00:04:32.861 05:01:51 -- setup/hugepages.sh@93 -- # local resv 00:04:32.861 05:01:51 -- setup/hugepages.sh@94 -- # local anon 00:04:32.861 05:01:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:32.861 05:01:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:32.861 05:01:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:32.861 05:01:51 -- setup/common.sh@18 -- # local node= 00:04:32.861 05:01:51 -- setup/common.sh@19 -- # local var val 00:04:32.861 05:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.861 05:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.861 05:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.861 05:01:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.861 05:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.861 05:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.861 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7929080 kB' 'MemAvailable: 9468884 kB' 'Buffers: 2436 kB' 'Cached: 1753960 kB' 'SwapCached: 0 kB' 'Active: 460504 kB' 'Inactive: 1413940 kB' 'Active(anon): 128520 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119864 kB' 'Mapped: 48740 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137368 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75280 kB' 'KernelStack: 6364 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
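The sizing of this odd_alloc run follows from the numbers in the trace above: HUGEMEM=2049 and get_test_nr_hugepages 2098176 end in nr_hugepages=1025. A worked check, assuming the fractional page count is rounded up to reach the odd total; the resulting 1025 pages match the 'HugePages_Total: 1025' and 'Hugetlb: 2099200 kB' fields in the meminfo dumps on either side of this note:

size_kb=$((2049 * 1024))                    # HUGEMEM=2049 MB -> 2098176 kB
page_kb=2048                                # Hugepagesize: 2048 kB
nr=$(((size_kb + page_kb - 1) / page_kb))   # round up: 1024.5 -> 1025 pages
echo "$nr hugepages = $((nr * page_kb)) kB" # 1025 * 2048 = 2099200 kB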
00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 
-- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.862 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.862 05:01:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.863 05:01:51 -- setup/common.sh@33 -- # echo 0 00:04:32.863 05:01:51 -- setup/common.sh@33 -- # return 0 00:04:32.863 05:01:51 -- setup/hugepages.sh@97 -- # anon=0 00:04:32.863 05:01:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.863 05:01:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.863 05:01:51 
-- setup/common.sh@18 -- # local node= 00:04:32.863 05:01:51 -- setup/common.sh@19 -- # local var val 00:04:32.863 05:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.863 05:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.863 05:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.863 05:01:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.863 05:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.863 05:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7928828 kB' 'MemAvailable: 9468632 kB' 'Buffers: 2436 kB' 'Cached: 1753960 kB' 'SwapCached: 0 kB' 'Active: 460528 kB' 'Inactive: 1413940 kB' 'Active(anon): 128544 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119612 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137432 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75344 kB' 'KernelStack: 6336 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.863 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.863 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 
05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.864 05:01:51 -- setup/common.sh@33 -- # echo 0 00:04:32.864 05:01:51 -- setup/common.sh@33 -- # return 0 00:04:32.864 05:01:51 -- setup/hugepages.sh@99 -- # surp=0 00:04:32.864 05:01:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.864 05:01:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.864 05:01:51 -- setup/common.sh@18 -- # local node= 00:04:32.864 05:01:51 -- setup/common.sh@19 -- # local var val 00:04:32.864 05:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.864 05:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.864 05:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.864 05:01:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.864 05:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.864 05:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7928828 kB' 'MemAvailable: 9468632 kB' 'Buffers: 2436 kB' 'Cached: 1753960 kB' 'SwapCached: 0 kB' 'Active: 460468 kB' 'Inactive: 1413940 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 
'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119552 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137432 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75344 kB' 'KernelStack: 6304 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 
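The loop traced above and below is setup/common.sh's get_meminfo scanning a /proc/meminfo snapshot one "key: value" pair at a time: mapfile captures the snapshot into the mem array, each read splits a line on IFS=': ', and every key that is not the requested field (here HugePages_Rsvd) hits the "continue" branch until the matching key echoes its value. A minimal standalone sketch of the same pattern, simplified to stream the file directly rather than snapshot it with mapfile, and without the per-node branch:

    #!/usr/bin/env bash
    # Walk "key: value" pairs of /proc/meminfo and print the value of the
    # requested key; non-matching keys fall through, mirroring the
    # "continue" entries in the trace.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd   # prints e.g. "0"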
00:04:32.864 05:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.864 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.864 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 
05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.865 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.865 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.866 05:01:51 -- setup/common.sh@33 -- # echo 0 00:04:32.866 05:01:51 -- setup/common.sh@33 -- # return 0 00:04:32.866 05:01:51 -- setup/hugepages.sh@100 -- # resv=0 00:04:32.866 nr_hugepages=1025 00:04:32.866 05:01:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:32.866 resv_hugepages=0 00:04:32.866 05:01:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.866 surplus_hugepages=0 00:04:32.866 05:01:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.866 anon_hugepages=0 00:04:32.866 05:01:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.866 05:01:51 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:32.866 05:01:51 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:32.866 05:01:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.866 05:01:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.866 05:01:51 -- setup/common.sh@18 -- # local node= 00:04:32.866 05:01:51 -- setup/common.sh@19 -- # local var val 00:04:32.866 05:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.866 05:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.866 05:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.866 05:01:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.866 05:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.866 05:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7928828 kB' 'MemAvailable: 9468632 kB' 'Buffers: 2436 kB' 'Cached: 1753960 kB' 'SwapCached: 0 kB' 'Active: 460456 kB' 'Inactive: 1413940 kB' 'Active(anon): 128472 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 119800 kB' 'Mapped: 48612 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137432 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75344 kB' 'KernelStack: 6356 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 349440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.866 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.866 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 
05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.867 05:01:51 -- setup/common.sh@33 -- # echo 1025 00:04:32.867 05:01:51 -- setup/common.sh@33 -- # return 0 00:04:32.867 
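get_meminfo HugePages_Total has just returned 1025, so the consistency check at setup/hugepages.sh@110 below can assert that the kernel-reported pool equals the requested pages plus surplus and reserved: 1025 == 1025 + 0 + 0. The same identity as an illustrative standalone re-check (hypothetical snippet, not part of the suite):

    # HugePages_Total must equal nr_hugepages + surp + resv.
    nr_hugepages=1025 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) && echo OK || echo MISMATCH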
05:01:51 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:32.867 05:01:51 -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.867 05:01:51 -- setup/hugepages.sh@27 -- # local node 00:04:32.867 05:01:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.867 05:01:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:32.867 05:01:51 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:32.867 05:01:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.867 05:01:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.867 05:01:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.867 05:01:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.867 05:01:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.867 05:01:51 -- setup/common.sh@18 -- # local node=0 00:04:32.867 05:01:51 -- setup/common.sh@19 -- # local var val 00:04:32.867 05:01:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.867 05:01:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.867 05:01:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.867 05:01:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.867 05:01:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.867 05:01:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7928828 kB' 'MemUsed: 4313136 kB' 'SwapCached: 0 kB' 'Active: 460200 kB' 'Inactive: 1413940 kB' 'Active(anon): 128216 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1756396 kB' 'Mapped: 48612 kB' 'AnonPages: 119580 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62088 kB' 'Slab: 137424 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.867 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.867 05:01:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 
00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # continue 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.868 05:01:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.868 05:01:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.868 05:01:51 -- setup/common.sh@33 -- # echo 0 00:04:32.868 05:01:51 -- setup/common.sh@33 -- # return 0 00:04:32.868 05:01:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.868 05:01:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.868 05:01:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.868 05:01:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.868 node0=1025 expecting 1025 00:04:32.868 05:01:51 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:32.868 05:01:51 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:32.868 00:04:32.868 real 0m0.780s 00:04:32.868 user 0m0.342s 00:04:32.868 sys 0m0.491s 00:04:32.868 05:01:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.868 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:04:32.868 ************************************ 00:04:32.868 END TEST odd_alloc 00:04:32.868 ************************************ 00:04:32.868 05:01:51 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:32.868 05:01:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.868 05:01:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.868 05:01:51 -- common/autotest_common.sh@10 -- # set +x 00:04:32.868 ************************************ 00:04:32.868 START TEST custom_alloc 00:04:32.868 ************************************ 00:04:32.868 05:01:51 -- common/autotest_common.sh@1104 -- # custom_alloc 
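custom_alloc builds a user-specified hugepage layout rather than an even spread: get_test_nr_hugepages converts the requested 1048576 kB into a page count at the 2048 kB hugepage size reported throughout this log (1048576 / 2048 = 512), stores it in nodes_hp[0], and hands the layout to setup via HUGENODE='nodes_hp[0]=512'. The conversion step as a sketch (values taken from the trace below; the 2048 kB divisor is an assumption backed by the 'Hugepagesize: 2048 kB' readings above):

    # Convert a requested amount of hugepage memory (kB) into a page count.
    size_kb=1048576                    # argument to get_test_nr_hugepages
    hugepagesize_kb=2048               # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"  # 1048576 / 2048 = 512
    echo "HUGENODE=nodes_hp[0]=$nr_hugepages"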
00:04:32.868 05:01:51 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:32.868 05:01:51 -- setup/hugepages.sh@169 -- # local node 00:04:32.868 05:01:51 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:32.868 05:01:51 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:32.868 05:01:51 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:32.869 05:01:51 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:32.869 05:01:51 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:32.869 05:01:51 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:32.869 05:01:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.869 05:01:51 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:32.869 05:01:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:32.869 05:01:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:32.869 05:01:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.869 05:01:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:32.869 05:01:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:32.869 05:01:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.869 05:01:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.869 05:01:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:32.869 05:01:51 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:32.869 05:01:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.869 05:01:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:32.869 05:01:51 -- setup/hugepages.sh@83 -- # : 0 00:04:32.869 05:01:51 -- setup/hugepages.sh@84 -- # : 0 00:04:32.869 05:01:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.869 05:01:51 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:32.869 05:01:51 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:32.869 05:01:51 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:32.869 05:01:51 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:32.869 05:01:51 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:32.869 05:01:51 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:33.128 05:01:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:33.128 05:01:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.128 05:01:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:33.128 05:01:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:33.128 05:01:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.128 05:01:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:33.128 05:01:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:33.128 05:01:51 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:33.128 05:01:51 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:33.128 05:01:51 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:33.128 05:01:51 -- setup/hugepages.sh@78 -- # return 0 00:04:33.128 05:01:51 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:33.128 05:01:51 -- setup/hugepages.sh@187 -- # setup output 00:04:33.128 05:01:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.128 05:01:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:33.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.699 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.699 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 
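setup.sh prints one line per storage controller explaining its binding decision: 0000:00:03.0 backs the mounted vda partitions and so stays on its kernel driver, while the 1b36:0010 controllers listed around it already sit on uio_pci_generic. A hedged sketch of the skip condition (assumed logic, not the script's exact implementation):

    # Do not rebind a PCI device whose block device (or any partition) is mounted.
    dev=vda
    if lsblk -no MOUNTPOINT "/dev/$dev" 2>/dev/null | grep -q .; then
        echo "skip: /dev/$dev has active mounts, so not binding its PCI dev"
    fi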
00:04:33.699 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.699 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:33.699 05:01:52 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:33.699 05:01:52 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:33.699 05:01:52 -- setup/hugepages.sh@89 -- # local node 00:04:33.699 05:01:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:33.699 05:01:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:33.699 05:01:52 -- setup/hugepages.sh@92 -- # local surp 00:04:33.699 05:01:52 -- setup/hugepages.sh@93 -- # local resv 00:04:33.699 05:01:52 -- setup/hugepages.sh@94 -- # local anon 00:04:33.699 05:01:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:33.699 05:01:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:33.699 05:01:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.699 05:01:52 -- setup/common.sh@18 -- # local node= 00:04:33.699 05:01:52 -- setup/common.sh@19 -- # local var val 00:04:33.699 05:01:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:33.699 05:01:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.699 05:01:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.699 05:01:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.699 05:01:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.699 05:01:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.699 05:01:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8974544 kB' 'MemAvailable: 10514348 kB' 'Buffers: 2436 kB' 'Cached: 1753960 kB' 'SwapCached: 0 kB' 'Active: 457396 kB' 'Inactive: 1413940 kB' 'Active(anon): 125412 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 116540 kB' 'Mapped: 48204 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137132 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75044 kB' 'KernelStack: 6268 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 334464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.699 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.699 05:01:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 
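verify_nr_hugepages only samples AnonHugePages when transparent hugepages are not globally disabled: the guard at setup/hugepages.sh@96 checked that the mode string 'always [madvise] never' does not contain '[never]', and the scan running here will find 'AnonHugePages: 0 kB', so anon ends up 0. A sketch of that guard (the mode string is assumed to come from the usual sysfs knob, whose format matches the trace):

    # Sample AnonHugePages only if THP is not set to "never".
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon=0
    fi
    echo "anon=$anon"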
00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.700 05:01:52 -- 
setup/common.sh@33 -- # echo 0 00:04:33.700 05:01:52 -- setup/common.sh@33 -- # return 0 00:04:33.700 05:01:52 -- setup/hugepages.sh@97 -- # anon=0 00:04:33.700 05:01:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:33.700 05:01:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.700 05:01:52 -- setup/common.sh@18 -- # local node= 00:04:33.700 05:01:52 -- setup/common.sh@19 -- # local var val 00:04:33.700 05:01:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:33.700 05:01:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.700 05:01:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.700 05:01:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.700 05:01:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.700 05:01:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8974320 kB' 'MemAvailable: 10514124 kB' 'Buffers: 2436 kB' 'Cached: 1753960 kB' 'SwapCached: 0 kB' 'Active: 456728 kB' 'Inactive: 1413940 kB' 'Active(anon): 124744 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 115848 kB' 'Mapped: 47872 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137164 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75076 kB' 'KernelStack: 6256 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 334464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB' 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.700 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.700 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.701 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.701 05:01:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.701 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.701 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.701 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.701 05:01:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.701 05:01:52 -- 
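The scan above is the test suite's get_meminfo helper from setup/common.sh walking /proc/meminfo one key at a time. The `\A\n\o\n\H\u\g\e\P\a\g\e\s` strings are simply how bash xtrace prints the quoted right-hand side of `[[ $var == "$get" ]]`, escaping each character so the key matches literally rather than as a glob; and the `[[ -e /sys/devices/system/node/node/meminfo ]]` test fails here because no node argument was passed, so $node expands empty and the helper falls back to /proc/meminfo. A minimal sketch of the loop the trace implies (reconstructed from the xtrace, not copied from the SPDK source):

  #!/usr/bin/env bash
  # Sketch of get_meminfo as reconstructed from the trace above.
  shopt -s extglob  # needed for the +([0-9]) pattern below

  get_meminfo() {
          local get=$1 node=${2:-} var val
          local mem_f=/proc/meminfo mem
          # A node argument switches to that node's sysfs copy of meminfo.
          if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                  mem_f=/sys/devices/system/node/node$node/meminfo
          fi
          mapfile -t mem < "$mem_f"
          mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
          while IFS=': ' read -r var val _; do
                  # Compare literally; on a hit print the number (the "kB"
                  # suffix, if any, lands in the discarded third field).
                  [[ $var == "$get" ]] && echo "$val" && return 0
          done < <(printf '%s\n' "${mem[@]}")
          return 1
  }

Called as `get_meminfo AnonHugePages`, this prints 0 for the snapshot above, which is exactly the `echo 0` the trace records next.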
[compare/continue trace elided: MemTotal through HugePages_Rsvd each tested against HugePages_Surp]
00:04:33.702 05:01:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.702 05:01:52 -- setup/common.sh@33 -- # echo 0
00:04:33.702 05:01:52 -- setup/common.sh@33 -- # return 0
00:04:33.702 05:01:52 -- setup/hugepages.sh@99 -- # surp=0
00:04:33.702 05:01:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:33.702 05:01:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[get_meminfo preamble identical to the HugePages_Surp call above: node unset, mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix strip]
00:04:33.702 05:01:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8974320 kB' 'MemAvailable: 10514124 kB' 'Buffers: 2436 kB' 'Cached: 1753960 kB' 'SwapCached: 0 kB' 'Active: 456372 kB' 'Inactive: 1413940 kB' 'Active(anon): 124388 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 115792 kB' 'Mapped: 47872 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137152 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75064 kB' 'KernelStack: 6256 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 341428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
[compare/continue trace elided: MemTotal through HugePages_Free each tested against HugePages_Rsvd]
00:04:33.703 05:01:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.703 05:01:52 -- setup/common.sh@33 -- # echo 0
00:04:33.703 05:01:52 -- setup/common.sh@33 -- # return 0
00:04:33.703 05:01:52 -- setup/hugepages.sh@100 -- # resv=0
00:04:33.703 05:01:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:33.703 nr_hugepages=512
00:04:33.703 05:01:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:33.703 resv_hugepages=0
00:04:33.703 05:01:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:33.703 surplus_hugepages=0
00:04:33.703 05:01:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:33.703 anon_hugepages=0
00:04:33.703 05:01:52 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:33.703 05:01:52 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:33.703 05:01:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:33.703 05:01:52 -- setup/common.sh@17 -- # local get=HugePages_Total
[get_meminfo preamble identical to the calls above: node unset, mem_f=/proc/meminfo]
00:04:33.703 05:01:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8976468 kB' 'MemAvailable: 10516272 kB' 'Buffers: 2436 kB' 'Cached: 1753960 kB' 'SwapCached: 0 kB' 'Active: 456712 kB' 'Inactive: 1413940 kB' 'Active(anon): 124728 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 115852 kB' 'Mapped: 47872 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137152 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75064 kB' 'KernelStack: 6272 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 334464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
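With anon, surp and resv in hand, the hugepages.sh@97-@110 lines traced above boil down to one consistency check: the pool the kernel reports must equal the requested page count plus surplus plus reserved. A sketch of that bookkeeping, inferred from the trace rather than copied from the script (nr_hugepages=512 is the value this run requested, and get_meminfo is the helper sketched earlier):

  nr_hugepages=512                     # pool size this run configured
  anon=$(get_meminfo AnonHugePages)    # traced as anon=0
  surp=$(get_meminfo HugePages_Surp)   # traced as surp=0
  resv=$(get_meminfo HugePages_Rsvd)   # traced as resv=0

  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"

  # Healthy pool: the kernel's HugePages_Total covers the request plus any
  # surplus and reserved pages (512 == 512 + 0 + 0 in this run).
  (($(get_meminfo HugePages_Total) == nr_hugepages + surp + resv))

With 512 total pages, 0 surplus and 0 reserved, both traced arithmetic checks pass, and the script moves on to per-node accounting below.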
[compare/continue trace elided: MemTotal through Unaccepted each tested against HugePages_Total]
00:04:33.705 05:01:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:33.705 05:01:52 -- setup/common.sh@33 -- # echo 512
00:04:33.705 05:01:52 -- setup/common.sh@33 -- # return 0
00:04:33.705 05:01:52 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:33.705 05:01:52 -- setup/hugepages.sh@112 -- # get_nodes
00:04:33.705 05:01:52 -- setup/hugepages.sh@27 -- # local node
00:04:33.705 05:01:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:33.705 05:01:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:33.705 05:01:52 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:33.705 05:01:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:33.705 05:01:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:33.705 05:01:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:33.705 05:01:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:33.705 05:01:52 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:33.705 05:01:52 -- setup/common.sh@18 -- # local node=0
00:04:33.705 05:01:52 -- setup/common.sh@19 -- # local var val
00:04:33.705 05:01:52 -- setup/common.sh@20 -- # local mem_f mem
00:04:33.705 05:01:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.705 05:01:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:33.705 05:01:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:33.705 05:01:52 -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.705 05:01:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.705 05:01:52 -- setup/common.sh@31 -- # IFS=': '
00:04:33.705 05:01:52 -- setup/common.sh@31 -- # read -r var val _
00:04:33.705 05:01:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 8976468 kB' 'MemUsed: 3265496 kB' 'SwapCached: 0 kB' 'Active: 456696 kB' 'Inactive: 1413940 kB' 'Active(anon): 124712 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1756396 kB' 'Mapped: 47872 kB' 'AnonPages: 115840 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62088 kB' 'Slab: 137152 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 75064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[compare/continue trace elided: MemTotal through SUnreclaim each tested against HugePages_Surp; log truncates mid-scan]
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # continue 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.706 05:01:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.706 05:01:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.706 05:01:52 -- setup/common.sh@33 -- # echo 0 00:04:33.706 05:01:52 -- setup/common.sh@33 -- # return 0 00:04:33.706 05:01:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.706 05:01:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.706 05:01:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.706 05:01:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.706 node0=512 expecting 512 00:04:33.706 05:01:52 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:33.706 05:01:52 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:33.706 00:04:33.706 real 0m0.789s 00:04:33.706 user 0m0.346s 00:04:33.706 sys 0m0.501s 00:04:33.706 05:01:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.706 05:01:52 -- common/autotest_common.sh@10 -- # set +x 00:04:33.706 ************************************ 00:04:33.706 END TEST custom_alloc 00:04:33.706 ************************************ 00:04:33.706 05:01:52 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:33.706 05:01:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.706 05:01:52 -- 
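What the compressed scan above corresponds to: get_meminfo in setup/common.sh walks /proc/meminfo (or a per-node sysfs meminfo) line by line until the requested field matches, then echoes its value. A minimal sketch reconstructed from the xtrace; function, variable names, and the continue line follow the trace, but this is not the verbatim SPDK script:

    shopt -s extglob   # the +([0-9]) pattern below is an extended glob

    # get_meminfo KEY [NODE] -- print the value of one meminfo field.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem

        # Per-node NUMA counters live in sysfs and carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " so both files parse alike

        for line in "${mem[@]}"; do
            # Split "Field:   value kB" into var/val; "_" swallows the unit.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the line xtraced once per non-matching field
            echo "$val"
            return 0
        done
        return 1
    }

Called as, say, get_meminfo HugePages_Surp, this prints 0 on the box above, which is exactly the echo 0 / return 0 pair the trace records.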
00:04:33.706 05:01:52 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:33.706 05:01:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:33.706 05:01:52 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:33.706 05:01:52 -- common/autotest_common.sh@10 -- # set +x
00:04:33.706 ************************************
00:04:33.706 START TEST no_shrink_alloc
00:04:33.706 ************************************
00:04:33.706 05:01:52 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:04:33.706 05:01:52 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:33.706 05:01:52 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:33.706 05:01:52 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:33.706 05:01:52 -- setup/hugepages.sh@51 -- # shift
00:04:33.706 05:01:52 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:34.021 05:01:52 -- setup/hugepages.sh@52 -- # local node_ids
00:04:34.021 05:01:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:34.021 05:01:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:34.021 05:01:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:34.021 05:01:52 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:34.021 05:01:52 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:34.021 05:01:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:34.021 05:01:52 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:34.021 05:01:52 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:34.021 05:01:52 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:34.021 05:01:52 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:34.021 05:01:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:34.021 05:01:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:34.021 05:01:52 -- setup/hugepages.sh@73 -- # return 0
00:04:34.021 05:01:52 -- setup/hugepages.sh@198 -- # setup output
00:04:34.021 05:01:52 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:34.021 05:01:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:34.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:34.305 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:34.305 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:34.305 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:34.305 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
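How the requested 2097152 kB becomes nr_hugepages=1024 in the trace above: /proc/meminfo on this machine reports Hugepagesize: 2048 kB, and 2097152 / 2048 = 1024 pages, assigned to node 0 via nodes_test. A sketch of that conversion; the division step is an assumption (the xtrace only shows the inputs and the resulting values), and the real hugepages.sh may differ:

    declare -a nodes_test   # global, as the trace's local -g suggests

    get_test_nr_hugepages() {
        local size=$1; shift                 # size in kB, e.g. 2097152
        local node_ids=("$@")                # e.g. ('0')
        local default_hugepages=2048         # kB, Hugepagesize from /proc/meminfo
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # -> 1024
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages   # 1024 pages requested on node 0
        done
    }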
00:04:34.566 05:01:53 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:34.566 05:01:53 -- setup/hugepages.sh@89 -- # local node
00:04:34.566 05:01:53 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:34.566 05:01:53 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:34.566 05:01:53 -- setup/hugepages.sh@92 -- # local surp
00:04:34.566 05:01:53 -- setup/hugepages.sh@93 -- # local resv
00:04:34.566 05:01:53 -- setup/hugepages.sh@94 -- # local anon
00:04:34.566 05:01:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:34.566 05:01:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:34.566 05:01:53 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:34.566 05:01:53 -- setup/common.sh@18 -- # local node=
00:04:34.566 05:01:53 -- setup/common.sh@19 -- # local var val
00:04:34.566 05:01:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.566 05:01:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.566 05:01:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.566 05:01:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.566 05:01:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.566 05:01:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.566 05:01:53 -- setup/common.sh@31 -- # IFS=': '
00:04:34.566 05:01:53 -- setup/common.sh@31 -- # read -r var val _
00:04:34.566 05:01:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7928936 kB' 'MemAvailable: 9468740 kB' 'Buffers: 2436 kB' 'Cached: 1753960 kB' 'SwapCached: 0 kB' 'Active: 457068 kB' 'Inactive: 1413940 kB' 'Active(anon): 125084 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116428 kB' 'Mapped: 47880 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137052 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 74964 kB' 'KernelStack: 6324 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 334464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
00:04:34.566 05:01:53 -- setup/common.sh@31-32 -- # [repeated xtrace compressed: the loop reads every field of the snapshot above, from MemTotal through HardwareCorrupted; each fails the [[ $var == AnonHugePages ]] test and hits continue]
00:04:34.567 05:01:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:34.567 05:01:53 -- setup/common.sh@33 -- # echo 0
00:04:34.567 05:01:53 -- setup/common.sh@33 -- # return 0
00:04:34.567 05:01:53 -- setup/hugepages.sh@97 -- # anon=0
00:04:34.567 05:01:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:34.567 05:01:53 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:34.567 05:01:53 -- setup/common.sh@18 -- # local node=
00:04:34.567 05:01:53 -- setup/common.sh@19 -- # local var val
00:04:34.567 05:01:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.567 05:01:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.567 05:01:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.567 05:01:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.567 05:01:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.567 05:01:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.567 05:01:53 -- setup/common.sh@31 -- # IFS=': '
00:04:34.567 05:01:53 -- setup/common.sh@31 -- # read -r var val _
00:04:34.567 05:01:53 -- setup/common.sh@16 -- # printf '%s\n' [snapshot #2: identical to the snapshot above except 'Active: 456688 kB' 'Active(anon): 124704 kB' 'AnonPages: 116036 kB' 'Mapped: 47872 kB' 'KernelStack: 6312 kB' 'PageTables: 3876 kB']
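Note that each get_meminfo call re-runs mapfile over /proc/meminfo, which is why Active, AnonPages, KernelStack, and PageTables drift slightly between the snapshots. Where one consistent snapshot matters, several fields can be pulled in a single pass; a purely illustrative alternative, not what setup/common.sh does:

    # One read of /proc/meminfo, four fields extracted from the same snapshot.
    read -r total free surp resv < <(
        awk '/^HugePages_Total:/ {t=$2}
             /^HugePages_Free:/  {f=$2}
             /^HugePages_Surp:/  {s=$2}
             /^HugePages_Rsvd:/  {r=$2}
             END {print t, f, s, r}' /proc/meminfo
    )
    echo "total=$total free=$free surp=$surp resv=$resv"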
00:04:34.567 05:01:53 -- setup/common.sh@31-32 -- # [repeated xtrace compressed: the loop reads every field of snapshot #2, from MemTotal through HugePages_Rsvd; each fails the [[ $var == HugePages_Surp ]] test and hits continue]
00:04:34.568 05:01:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:34.568 05:01:53 -- setup/common.sh@33 -- # echo 0
00:04:34.568 05:01:53 -- setup/common.sh@33 -- # return 0
00:04:34.568 05:01:53 -- setup/hugepages.sh@99 -- # surp=0
00:04:34.568 05:01:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:34.568 05:01:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:34.568 05:01:53 -- setup/common.sh@18 -- # local node=
00:04:34.568 05:01:53 -- setup/common.sh@19 -- # local var val
00:04:34.568 05:01:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.568 05:01:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.568 05:01:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.568 05:01:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.568 05:01:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.568 05:01:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.568 05:01:53 -- setup/common.sh@31 -- # IFS=': '
00:04:34.568 05:01:53 -- setup/common.sh@31 -- # read -r var val _
00:04:34.568 05:01:53 -- setup/common.sh@16 -- # printf '%s\n' [snapshot #3: identical to snapshot #1 except 'Active: 456684 kB' 'Active(anon): 124700 kB' 'AnonPages: 116012 kB' 'Mapped: 47872 kB' 'KernelStack: 6296 kB' 'PageTables: 3820 kB']
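The mem=("${mem[@]#Node +([0-9]) }") step that precedes each scan exists because per-node stats in /sys/devices/system/node/nodeN/meminfo are prefixed with "Node N ", and stripping that prefix lets one parser handle both files. A toy demonstration of the extended-glob strip (array values here are made up):

    shopt -s extglob
    mem=('Node 0 HugePages_Total: 1024' 'Node 0 HugePages_Free: 1024')
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # -> HugePages_Total: 1024
    # -> HugePages_Free: 1024

On a plain /proc/meminfo line the pattern simply fails to match and the line is left untouched, which is why the strip is applied unconditionally.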
00:04:34.568 05:01:53 -- setup/common.sh@31-32 -- # [repeated xtrace compressed: the loop reads every field of snapshot #3, from MemTotal through HugePages_Free; each fails the [[ $var == HugePages_Rsvd ]] test and hits continue]
00:04:34.569 05:01:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:34.569 05:01:53 -- setup/common.sh@33 -- # echo 0
00:04:34.569 05:01:53 -- setup/common.sh@33 -- # return 0
00:04:34.569 05:01:53 -- setup/hugepages.sh@100 -- # resv=0
00:04:34.569 nr_hugepages=1024
00:04:34.569 05:01:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:34.569 resv_hugepages=0
00:04:34.569 05:01:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:34.569 surplus_hugepages=0
00:04:34.569 05:01:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:34.569 anon_hugepages=0
00:04:34.569 05:01:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:34.569 05:01:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:34.569 05:01:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:34.569 05:01:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:34.569 05:01:53 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:34.569 05:01:53 -- setup/common.sh@18 -- # local node=
00:04:34.569 05:01:53 -- setup/common.sh@19 -- # local var val
00:04:34.569 05:01:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.569 05:01:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.569 05:01:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.569 05:01:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.569 05:01:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.569 05:01:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.569 05:01:53 -- setup/common.sh@31 -- # IFS=': '
00:04:34.569 05:01:53 -- setup/common.sh@31 -- # read -r var val _
00:04:34.570 05:01:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7928936 kB' 'MemAvailable: 9468740 kB' 'Buffers: 2436 kB' 'Cached: 1753960 kB' 'SwapCached: 0 kB' 'Active: 456656 kB' 'Inactive: 1413940 kB' 'Active(anon): 124672 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116036 kB' 'Mapped: 47872 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 137052 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 74964 kB' 'KernelStack: 6312 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 334464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the setup/common.sh@31-32 read loop scans every /proc/meminfo key listed above, logging continue for each, until the requested key matches]
00:04:34.571 05:01:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:34.571 05:01:53 -- setup/common.sh@33 -- # echo 1024
00:04:34.571 05:01:53 -- setup/common.sh@33 -- # return 0
00:04:34.571 05:01:53 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
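Line 110 of setup/hugepages.sh re-checks the allocation against the kernel's own counter: every page reported in HugePages_Total must be accounted for by the requested count plus surplus plus reserved pages. A hedged reconstruction of that bookkeeping, reusing the get_meminfo sketch above (variable names follow the xtrace, not the verbatim script):

    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo HugePages_Total)             # 1024 in this run
    # Invariant: an allocated huge page is either requested, surplus, or reserved.
    (( total == nr_hugepages + surp + resv )) || exit 1
    # Per node: add reserved pages and the node's surplus to the expected
    # count, then compare against the target (mirrors hugepages.sh@116-130).
    node0=$(( 1024 + resv + $(get_meminfo HugePages_Surp 0) ))
    echo "node0=$node0 expecting $nr_hugepages"
    [[ $node0 == "$nr_hugepages" ]] || exit 1

The global check passes here (1024 == 1024 + 0 + 0), so the script moves on to the per-NUMA-node comparison traced next.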
00:04:34.571 05:01:53 -- setup/hugepages.sh@112 -- # get_nodes
00:04:34.571 05:01:53 -- setup/hugepages.sh@27 -- # local node
00:04:34.571 05:01:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:34.571 05:01:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:34.571 05:01:53 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:34.571 05:01:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:34.571 05:01:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:34.571 05:01:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:34.571 05:01:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:34.571 05:01:53 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:34.571 05:01:53 -- setup/common.sh@18 -- # local node=0
00:04:34.571 05:01:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:34.571 05:01:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:34.571 05:01:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.571 05:01:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.571 05:01:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7928936 kB' 'MemUsed: 4313028 kB' 'SwapCached: 0 kB' 'Active: 456688 kB' 'Inactive: 1413940 kB' 'Active(anon): 124704 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1756396 kB' 'Mapped: 47872 kB' 'AnonPages: 116040 kB' 'Shmem: 10472 kB' 'KernelStack: 6312 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62088 kB' 'Slab: 137052 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 74964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: the setup/common.sh@31-32 read loop scans every node0 meminfo key listed above until HugePages_Surp matches]
00:04:34.572 05:01:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:34.572 05:01:53 -- setup/common.sh@33 -- # echo 0
00:04:34.572 05:01:53 -- setup/common.sh@33 -- # return 0
00:04:34.572 05:01:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:34.572 05:01:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:34.572 05:01:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:34.572 05:01:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:34.572 node0=1024 expecting 1024
00:04:34.572 05:01:53 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:34.572 05:01:53 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:34.572 05:01:53 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:34.572 05:01:53 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:34.572 05:01:53 -- setup/hugepages.sh@202 -- # setup output
00:04:34.572 05:01:53 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:34.572 05:01:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:35.138 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:35.138 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:35.138 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:35.138 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:35.138 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:35.138 INFO: Requested 512 hugepages but 1024 already allocated on node0
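The node0 count matches, so the stage re-runs scripts/setup.sh with the environment traced at hugepages.sh@202. The effective invocation is below (path taken from the trace; the semantics of the two variables are inferred from the INFO line, so treat them as assumptions rather than documented behaviour):

    CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    # With CLEAR_HUGE=no the existing reservation is left in place, so asking
    # for 512 pages is a no-op against the 1024 already reserved, hence:
    #   INFO: Requested 512 hugepages but 1024 already allocated on node0

The script then re-verifies the hugepage counts, as traced next.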
00:04:35.138 05:01:54 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:35.138 05:01:54 -- setup/hugepages.sh@89 -- # local node
00:04:35.138 05:01:54 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:35.138 05:01:54 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:35.138 05:01:54 -- setup/hugepages.sh@92 -- # local surp
00:04:35.138 05:01:54 -- setup/hugepages.sh@93 -- # local resv
00:04:35.138 05:01:54 -- setup/hugepages.sh@94 -- # local anon
00:04:35.138 05:01:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:35.138 05:01:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[xtrace elided: get_meminfo preamble as above, node unset, mem_f=/proc/meminfo]
00:04:35.138 05:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7937328 kB' 'MemAvailable: 9477132 kB' 'Buffers: 2436 kB' 'Cached: 1753960 kB' 'SwapCached: 0 kB' 'Active: 457192 kB' 'Inactive: 1413940 kB' 'Active(anon): 125208 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116180 kB' 'Mapped: 48492 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 136916 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 74828 kB' 'KernelStack: 6420 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 334592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the setup/common.sh@31-32 read loop scans the keys above until AnonHugePages matches]
00:04:35.139 05:01:54 -- setup/common.sh@33 -- # echo 0
00:04:35.139 05:01:54 -- setup/common.sh@33 -- # return 0
00:04:35.139 05:01:54 -- setup/hugepages.sh@97 -- # anon=0
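Before counting anonymous huge pages, verify_nr_hugepages gates on the transparent-hugepage mode: the bracketed token in the sysfs file marks the active setting, and the check at hugepages.sh@96 only skips the AnonHugePages read when that token is [never]. A sketch of the gate (the sysfs path is the usual kernel location, assumed here rather than shown in the trace):

    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this run
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)                  # 0 kB here
    fi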
00:04:35.139 05:01:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace elided: get_meminfo preamble as above, node unset, mem_f=/proc/meminfo]
00:04:35.140 05:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7937368 kB' 'MemAvailable: 9477172 kB' 'Buffers: 2436 kB' 'Cached: 1753960 kB' 'SwapCached: 0 kB' 'Active: 456516 kB' 'Inactive: 1413940 kB' 'Active(anon): 124532 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 115904 kB' 'Mapped: 48072 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 136916 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 74828 kB' 'KernelStack: 6304 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 334592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the setup/common.sh@31-32 read loop scans the keys above until HugePages_Surp matches]
00:04:35.402 05:01:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.402 05:01:54 -- setup/common.sh@33 -- # echo 0
00:04:35.402 05:01:54 -- setup/common.sh@33 -- # return 0
00:04:35.402 05:01:54 -- setup/hugepages.sh@99 -- # surp=0
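For anyone reproducing these reads by hand, the same counters can be pulled straight from /proc/meminfo without the helper; an equivalent spot-check (awk here is an illustration, not the script's own method):

    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo
    # HugePages_Total: 1024
    # HugePages_Free: 1024
    # HugePages_Rsvd: 0
    # HugePages_Surp: 0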
05:01:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 
05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.403 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.403 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 
00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # continue 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.404 05:01:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.404 05:01:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.404 05:01:54 -- setup/common.sh@33 -- # echo 0 
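The [[ key == \H\u\g\e... ]] / continue pairs condensed above are get_meminfo walking the snapshot one field at a time. A minimal sketch of that helper, reconstructed from the xtrace alone (setup/common.sh is not shown in full here, so treat the details as approximate):

    #!/usr/bin/env bash
    shopt -s extglob   # required by the "Node N " strip pattern below

    # get_meminfo <field> [node]: print the value of one meminfo field,
    # preferring the per-node file when a NUMA node is given.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem line
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. "0" for HugePages_Rsvd on this box
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Rsvd      # -> 0, as traced above
    get_meminfo HugePages_Surp 0    # same lookup against node0's meminfo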
00:04:35.404 05:01:54 -- setup/common.sh@33 -- # return 0
00:04:35.404 05:01:54 -- setup/hugepages.sh@100 -- # resv=0
00:04:35.404 05:01:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:35.404 nr_hugepages=1024
00:04:35.404 05:01:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:35.404 resv_hugepages=0
00:04:35.404 05:01:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:35.404 surplus_hugepages=0
00:04:35.404 05:01:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:35.404 anon_hugepages=0
00:04:35.404 05:01:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:35.404 05:01:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:35.404 05:01:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:35.404 05:01:54 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:35.404 05:01:54 -- setup/common.sh@18 -- # local node=
00:04:35.404 05:01:54 -- setup/common.sh@19 -- # local var val
00:04:35.404 05:01:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:35.404 05:01:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.404 05:01:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.404 05:01:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.404 05:01:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.404 05:01:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.404 05:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7937528 kB' 'MemAvailable: 9477332 kB' 'Buffers: 2436 kB' 'Cached: 1753960 kB' 'SwapCached: 0 kB' 'Active: 456620 kB' 'Inactive: 1413940 kB' 'Active(anon): 124636 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116024 kB' 'Mapped: 47872 kB' 'Shmem: 10472 kB' 'KReclaimable: 62088 kB' 'Slab: 136912 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 74824 kB' 'KernelStack: 6256 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 334592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 167788 kB' 'DirectMap2M: 5074944 kB' 'DirectMap1G: 9437184 kB'
00:04:35.404 05:01:54 -- setup/common.sh@31-32 -- # [scan: every key from MemTotal through Unaccepted compared against HugePages_Total and skipped with 'continue']
00:04:35.406 05:01:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:35.406 05:01:54 -- setup/common.sh@33 -- # echo 1024
00:04:35.406 05:01:54 -- setup/common.sh@33 -- # return 0
00:04:35.406 05:01:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:35.406 05:01:54 -- setup/hugepages.sh@112 -- # get_nodes
00:04:35.406 05:01:54 -- setup/hugepages.sh@27 -- # local node
00:04:35.406 05:01:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:35.406 05:01:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:35.406 05:01:54 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:35.406 05:01:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:35.406 05:01:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:35.406 05:01:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
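Lines @107 through @116 above are the consistency check itself: the pool the test configured must equal what the kernel reports once reserved and surplus pages are counted. In plain bash, the assertion being traced (using get_meminfo as sketched earlier; the 1024 matches this run's setup):

    # Verify kernel hugepage accounting against the requested pool size.
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)   # pages allocated beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)   # pages promised to mappings, not yet faulted in
    if (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent"
    fi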
00:04:35.406 05:01:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:35.406 05:01:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:35.406 05:01:54 -- setup/common.sh@18 -- # local node=0
00:04:35.406 05:01:54 -- setup/common.sh@19 -- # local var val
00:04:35.406 05:01:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:35.406 05:01:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.406 05:01:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:35.406 05:01:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:35.406 05:01:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.406 05:01:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.406 05:01:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241964 kB' 'MemFree: 7937528 kB' 'MemUsed: 4304436 kB' 'SwapCached: 0 kB' 'Active: 456440 kB' 'Inactive: 1413940 kB' 'Active(anon): 124456 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1413940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1756396 kB' 'Mapped: 47872 kB' 'AnonPages: 115844 kB' 'Shmem: 10472 kB' 'KernelStack: 6256 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62088 kB' 'Slab: 136912 kB' 'SReclaimable: 62088 kB' 'SUnreclaim: 74824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:35.406 05:01:54 -- setup/common.sh@31-32 -- # [scan: every node0 key from MemTotal through HugePages_Free compared against HugePages_Surp and skipped with 'continue']
00:04:35.407 05:01:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.407 05:01:54 -- setup/common.sh@33 -- # echo 0
00:04:35.407 05:01:54 -- setup/common.sh@33 -- # return 0
00:04:35.407 05:01:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:35.407 05:01:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:35.407 05:01:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:35.407 05:01:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:35.407 05:01:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:35.407 node0=1024 expecting 1024
00:04:35.407 05:01:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:35.407 real 0m1.558s
00:04:35.407 user 0m0.693s
00:04:35.407 sys 0m0.975s
00:04:35.407 05:01:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:35.407 05:01:54 -- common/autotest_common.sh@10 -- # set +x
00:04:35.407 ************************************
00:04:35.407 END TEST no_shrink_alloc
00:04:35.407 ************************************
00:04:35.407 05:01:54 -- setup/hugepages.sh@217 -- # clear_hp
00:04:35.407 05:01:54 -- setup/hugepages.sh@37 -- # local node hp
00:04:35.407 05:01:54 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:35.407 05:01:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:35.407 05:01:54 -- setup/hugepages.sh@41 -- # echo 0
00:04:35.407 05:01:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:35.407 05:01:54 -- setup/hugepages.sh@41 -- # echo 0
00:04:35.407 05:01:54 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:35.407 05:01:54 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:35.407 real 0m6.812s
00:04:35.407 user 0m2.854s
00:04:35.407 sys 0m4.269s
00:04:35.407 05:01:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:35.407 05:01:54 -- common/autotest_common.sh@10 -- # set +x
00:04:35.407 ************************************
00:04:35.407 END TEST hugepages
00:04:35.407 ************************************
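The clear_hp trace above only shows an "echo 0" per hugepage pool; xtrace does not print the redirection target. A minimal reconstruction of the teardown, assuming each write lands in the pool's nr_hugepages attribute (that target is an assumption, not visible in the log):

    # clear_hp: return every per-node hugepage pool to zero after the test.
    clear_hp() {
        local node hp
        for node in "${!nodes_sys[@]}"; do
            # One entry per page size, e.g. hugepages-2048kB and hugepages-1048576kB.
            for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
                echo 0 > "$hp/nr_hugepages"   # assumed target of the traced "echo 0"
            done
        done
        export CLEAR_HUGE=yes   # signal later setup.sh runs to keep pools cleared
    }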
00:04:35.407 05:01:54 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:35.407 05:01:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:35.407 05:01:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:35.407 05:01:54 -- common/autotest_common.sh@10 -- # set +x
00:04:35.407 ************************************
00:04:35.407 START TEST driver
00:04:35.407 ************************************
00:04:35.407 05:01:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:35.666 * Looking for test storage...
00:04:35.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:35.666 05:01:54 -- setup/driver.sh@68 -- # setup reset
00:04:35.666 05:01:54 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:35.666 05:01:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:42.235 05:02:00 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:42.235 05:02:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:42.235 05:02:00 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:42.235 05:02:00 -- common/autotest_common.sh@10 -- # set +x
00:04:42.235 ************************************
00:04:42.235 START TEST guess_driver
00:04:42.235 ************************************
00:04:42.235 05:02:00 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:42.235 05:02:00 -- setup/driver.sh@47 -- # local fail=0
00:04:42.235 05:02:00 -- setup/driver.sh@49 -- # pick_driver
00:04:42.235 05:02:00 -- setup/driver.sh@36 -- # vfio
00:04:42.235 05:02:00 -- setup/driver.sh@21 -- # local iommu_groups
00:04:42.235 05:02:00 -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:42.235 05:02:00 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:42.235 05:02:00 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:42.235 05:02:00 -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:04:42.235 05:02:00 -- setup/driver.sh@29 -- # [[ '' == Y ]]
00:04:42.235 05:02:00 -- setup/driver.sh@32 -- # return 1
00:04:42.235 05:02:00 -- setup/driver.sh@38 -- # uio
00:04:42.235 05:02:00 -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:04:42.235 05:02:00 -- setup/driver.sh@14 -- # mod uio_pci_generic
00:04:42.235 05:02:00 -- setup/driver.sh@12 -- # dep uio_pci_generic
00:04:42.235 05:02:00 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:04:42.235 05:02:00 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz
00:04:42.235 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]]
00:04:42.235 05:02:00 -- setup/driver.sh@39 -- # echo uio_pci_generic
00:04:42.235 05:02:00 -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:04:42.235 05:02:00 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:42.235 05:02:00 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
00:04:42.235 Looking for driver=uio_pci_generic
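As the pick_driver trace above shows, the script prefers vfio when the IOMMU is usable and falls back to uio_pci_generic only if modprobe can resolve it to an on-disk module. A hedged reconstruction of that decision (function and variable names follow the trace; the real setup/driver.sh may differ in detail):

    # Assumes nullglob so an empty iommu_groups directory yields a 0-length array,
    # matching the traced "(( 0 > 0 ))" on this host.
    shopt -s nullglob
    pick_driver() {
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        local unsafe_vfio=
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
            # The fallback wins here: both uio.ko.xz and uio_pci_generic.ko.xz
            # resolve, so the module is present and usable.
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }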
00:04:42.235 05:02:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:42.235 05:02:00 -- setup/driver.sh@45 -- # setup output config
00:04:42.235 05:02:00 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.235 05:02:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:42.803 05:02:01 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:04:42.803 05:02:01 -- setup/driver.sh@58 -- # continue
00:04:42.803 05:02:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:43.062 05:02:02 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:43.062 05:02:02 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:43.062 05:02:02 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:43.062 05:02:02 -- setup/driver.sh@58-61 -- # [three more '->' entries follow; each bound device reports driver uio_pci_generic]
00:04:43.321 05:02:02 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:43.321 05:02:02 -- setup/driver.sh@65 -- # setup reset
00:04:43.321 05:02:02 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:43.321 05:02:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:49.920 real 0m7.636s
00:04:49.920 user 0m0.898s
00:04:49.920 sys 0m1.916s
00:04:49.920 05:02:08 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:49.920 05:02:08 -- common/autotest_common.sh@10 -- # set +x
00:04:49.920 ************************************
00:04:49.920 END TEST guess_driver
00:04:49.920 ************************************
00:04:49.920 real 0m13.940s
00:04:49.920 user 0m1.327s
00:04:49.920 sys 0m3.005s
00:04:49.920 05:02:08 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:49.920 05:02:08 -- common/autotest_common.sh@10 -- # set +x
00:04:49.920 ************************************
00:04:49.920 END TEST driver
00:04:49.920 ************************************
00:04:49.920 05:02:08 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:49.920 05:02:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:49.920 05:02:08 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:49.920 05:02:08 -- common/autotest_common.sh@10 -- # set +x
00:04:49.920 ************************************
00:04:49.920 START TEST devices
00:04:49.920 ************************************
00:04:49.920 05:02:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:49.920 * Looking for test storage...
00:04:49.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:49.920 05:02:08 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:49.920 05:02:08 -- setup/devices.sh@192 -- # setup reset
00:04:49.920 05:02:08 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:49.920 05:02:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:50.854 05:02:09 -- setup/devices.sh@194 -- # get_zoned_devs
00:04:50.854 05:02:09 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:04:50.854 05:02:09 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:04:50.854 05:02:09 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:04:50.854 05:02:09 -- common/autotest_common.sh@1657-1650 -- # [is_block_zoned run for nvme0c0n1, nvme0n1, nvme1n1, nvme1n2, nvme1n3, nvme2n1 and nvme3n1: every /sys/block/<dev>/queue/zoned reads 'none', so no device is zoned and zoned_devs stays empty]
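The per-device checks condensed above all follow the same three traced steps. As reconstructed from the xtrace:

    # A block device is "zoned" iff its queue/zoned attribute reports
    # something other than "none". All seven nvme nodes here report "none".
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }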
00:04:50.854 05:02:09 -- setup/devices.sh@196 -- # blocks=()
00:04:50.854 05:02:09 -- setup/devices.sh@196 -- # declare -a blocks
00:04:50.854 05:02:09 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:50.854 05:02:09 -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:50.854 05:02:09 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:50.854 05:02:09 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:50.854 05:02:09 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:50.854 05:02:09 -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:50.854 05:02:09 -- setup/devices.sh@202 -- # pci=0000:00:09.0
00:04:50.854 05:02:09 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]]
00:04:50.854 05:02:09 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:50.854 05:02:09 -- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:04:50.854 05:02:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:04:50.855 No valid GPT data, bailing
00:04:50.855 05:02:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:50.855 05:02:09 -- scripts/common.sh@393 -- # pt=
00:04:50.855 05:02:09 -- scripts/common.sh@394 -- # return 1
00:04:50.855 05:02:09 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:50.855 05:02:09 -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:50.855 05:02:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:50.855 05:02:09 -- setup/common.sh@80 -- # echo 1073741824
00:04:50.855 05:02:09 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size ))
00:04:50.855 05:02:09 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:50.855 05:02:09 -- setup/devices.sh@201 -- # ctrl=nvme1n1
00:04:50.855 05:02:09 -- setup/devices.sh@201 -- # ctrl=nvme1
00:04:50.855 05:02:09 -- setup/devices.sh@202 -- # pci=0000:00:08.0
00:04:50.855 05:02:09 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]]
00:04:50.855 05:02:09 -- setup/devices.sh@204 -- # block_in_use nvme1n1
00:04:50.855 05:02:09 -- scripts/common.sh@380 -- # local block=nvme1n1 pt
00:04:50.855 05:02:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:04:50.855 No valid GPT data, bailing
00:04:50.855 05:02:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:04:50.855 05:02:09 -- scripts/common.sh@393 -- # pt=
00:04:50.855 05:02:09 -- scripts/common.sh@394 -- # return 1
00:04:50.855 05:02:09 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1
00:04:50.855 05:02:09 -- setup/common.sh@76 -- # local dev=nvme1n1
00:04:50.855 05:02:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]]
00:04:50.855 05:02:09 -- setup/common.sh@80 -- # echo 4294967296
00:04:50.855 05:02:09 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size ))
00:04:50.855 05:02:09 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:50.855 05:02:09 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0
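nvme0n1 fails only the size gate (1 GiB is below the 3 GiB floor), while nvme1n1 passes every check and is recorded; the remaining disks, condensed below, are probed the same way. The overall shape of the selection loop, reconstructed from the trace (the PCI-address derivation via readlink is an assumption; the log only shows the resulting pci= assignments):

    shopt -s extglob nullglob
    min_disk_size=3221225472   # 3 GiB floor, as traced
    blocks=()
    declare -A blocks_to_pci

    # Simplified stand-in for the traced spdk-gpt.py + blkid probe:
    # succeeds (disk in use) only when a partition table type is found.
    block_in_use() { [[ -n $(blkid -s PTTYPE -o value "/dev/$1" 2> /dev/null) ]]; }

    for block in /sys/block/nvme!(*c*); do
        dev=${block##*/}
        ctrl=${dev%n*}   # nvme1n1 -> nvme1
        # Assumption: resolve the controller's PCI address through sysfs.
        pci=$(basename "$(readlink -f "/sys/class/nvme/$ctrl/device")")
        [[ ${zoned_devs[*]-} == *"$pci"* ]] && continue   # skip zoned controllers
        block_in_use "$dev" && continue                   # skip disks with partition data
        if (( $(sec_size_to_bytes "$dev") >= min_disk_size )); then
            blocks+=("$dev")
            blocks_to_pci[$dev]=$pci
        fi
    done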
*\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:50.855 05:02:09 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:50.855 05:02:09 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:50.855 05:02:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:50.855 No valid GPT data, bailing 00:04:50.855 05:02:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:50.855 05:02:09 -- scripts/common.sh@393 -- # pt= 00:04:50.855 05:02:09 -- scripts/common.sh@394 -- # return 1 00:04:50.855 05:02:09 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:50.855 05:02:09 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:50.855 05:02:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:50.855 05:02:09 -- setup/common.sh@80 -- # echo 4294967296 00:04:50.855 05:02:09 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:50.855 05:02:09 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.113 05:02:09 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:51.114 05:02:09 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.114 05:02:09 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:51.114 05:02:09 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:51.114 05:02:09 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:51.114 05:02:09 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:51.114 05:02:09 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:51.114 05:02:09 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:51.114 05:02:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:51.114 No valid GPT data, bailing 00:04:51.114 05:02:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:51.114 05:02:10 -- scripts/common.sh@393 -- # pt= 00:04:51.114 05:02:10 -- scripts/common.sh@394 -- # return 1 00:04:51.114 05:02:10 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:51.114 05:02:10 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:51.114 05:02:10 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:51.114 05:02:10 -- setup/common.sh@80 -- # echo 4294967296 00:04:51.114 05:02:10 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:51.114 05:02:10 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.114 05:02:10 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:51.114 05:02:10 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.114 05:02:10 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:51.114 05:02:10 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:51.114 05:02:10 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:51.114 05:02:10 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:51.114 05:02:10 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:51.114 05:02:10 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:51.114 05:02:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:51.114 No valid GPT data, bailing 00:04:51.114 05:02:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:51.114 05:02:10 -- scripts/common.sh@393 -- # pt= 00:04:51.114 05:02:10 -- scripts/common.sh@394 -- # return 1 00:04:51.114 05:02:10 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:51.114 05:02:10 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:51.114 05:02:10 -- setup/common.sh@78 
-- # [[ -e /sys/block/nvme2n1 ]] 00:04:51.114 05:02:10 -- setup/common.sh@80 -- # echo 6343335936 00:04:51.114 05:02:10 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:51.114 05:02:10 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.114 05:02:10 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:51.114 05:02:10 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:51.114 05:02:10 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:51.114 05:02:10 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:51.114 05:02:10 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:51.114 05:02:10 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:51.114 05:02:10 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:51.114 05:02:10 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:51.114 05:02:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:51.114 No valid GPT data, bailing 00:04:51.114 05:02:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:51.114 05:02:10 -- scripts/common.sh@393 -- # pt= 00:04:51.114 05:02:10 -- scripts/common.sh@394 -- # return 1 00:04:51.114 05:02:10 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:51.114 05:02:10 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:51.114 05:02:10 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:51.114 05:02:10 -- setup/common.sh@80 -- # echo 5368709120 00:04:51.114 05:02:10 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:51.114 05:02:10 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:51.114 05:02:10 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:51.114 05:02:10 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:51.114 05:02:10 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:51.114 05:02:10 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:51.114 05:02:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.114 05:02:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.114 05:02:10 -- common/autotest_common.sh@10 -- # set +x 00:04:51.114 ************************************ 00:04:51.114 START TEST nvme_mount 00:04:51.114 ************************************ 00:04:51.114 05:02:10 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:51.114 05:02:10 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:51.114 05:02:10 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:51.114 05:02:10 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:51.114 05:02:10 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:51.114 05:02:10 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:51.114 05:02:10 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:51.114 05:02:10 -- setup/common.sh@40 -- # local part_no=1 00:04:51.114 05:02:10 -- setup/common.sh@41 -- # local size=1073741824 00:04:51.114 05:02:10 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:51.114 05:02:10 -- setup/common.sh@44 -- # parts=() 00:04:51.114 05:02:10 -- setup/common.sh@44 -- # local parts 00:04:51.114 05:02:10 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:51.114 05:02:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:51.114 05:02:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:51.114 05:02:10 -- setup/common.sh@46 -- # (( 
part++ )) 00:04:51.114 05:02:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:51.114 05:02:10 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:51.114 05:02:10 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:51.114 05:02:10 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:52.488 Creating new GPT entries in memory. 00:04:52.488 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:52.488 other utilities. 00:04:52.488 05:02:11 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:52.488 05:02:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.489 05:02:11 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.489 05:02:11 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.489 05:02:11 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:53.424 Creating new GPT entries in memory. 00:04:53.424 The operation has completed successfully. 00:04:53.424 05:02:12 -- setup/common.sh@57 -- # (( part++ )) 00:04:53.424 05:02:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.424 05:02:12 -- setup/common.sh@62 -- # wait 53978 00:04:53.424 05:02:12 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.424 05:02:12 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:53.424 05:02:12 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.424 05:02:12 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:53.424 05:02:12 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:53.424 05:02:12 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.424 05:02:12 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.424 05:02:12 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:53.424 05:02:12 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:53.424 05:02:12 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.424 05:02:12 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.424 05:02:12 -- setup/devices.sh@53 -- # local found=0 00:04:53.424 05:02:12 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.424 05:02:12 -- setup/devices.sh@56 -- # : 00:04:53.424 05:02:12 -- setup/devices.sh@59 -- # local pci status 00:04:53.424 05:02:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.424 05:02:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:53.424 05:02:12 -- setup/devices.sh@47 -- # setup output config 00:04:53.424 05:02:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.424 05:02:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.424 05:02:12 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:53.424 05:02:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.682 05:02:12 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:53.682 05:02:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.940 
05:02:12 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:53.940 05:02:12 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:53.940 05:02:12 -- setup/devices.sh@63 -- # found=1 00:04:53.941 05:02:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.941 05:02:12 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:53.941 05:02:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.200 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.200 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.200 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.200 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.200 05:02:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.200 05:02:13 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:54.200 05:02:13 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.200 05:02:13 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.200 05:02:13 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.200 05:02:13 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:54.200 05:02:13 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.200 05:02:13 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.200 05:02:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:54.200 05:02:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:54.458 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.458 05:02:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:54.458 05:02:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:54.716 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:54.716 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:54.716 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:54.716 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:54.716 05:02:13 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:54.716 05:02:13 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:54.716 05:02:13 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.716 05:02:13 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:54.716 05:02:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:54.716 05:02:13 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.716 05:02:13 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.716 05:02:13 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:54.716 05:02:13 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:54.716 05:02:13 -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.716 05:02:13 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:54.716 05:02:13 -- setup/devices.sh@53 -- # local found=0 00:04:54.716 05:02:13 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.716 05:02:13 -- setup/devices.sh@56 -- # : 00:04:54.716 05:02:13 -- setup/devices.sh@59 -- # local pci status 00:04:54.716 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.716 05:02:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:54.716 05:02:13 -- setup/devices.sh@47 -- # setup output config 00:04:54.716 05:02:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.716 05:02:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.716 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.716 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.975 05:02:13 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:54.975 05:02:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.234 05:02:14 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.234 05:02:14 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:55.234 05:02:14 -- setup/devices.sh@63 -- # found=1 00:04:55.234 05:02:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.234 05:02:14 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.234 05:02:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.492 05:02:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.492 05:02:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.492 05:02:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.492 05:02:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.493 05:02:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.493 05:02:14 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:55.493 05:02:14 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.493 05:02:14 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.493 05:02:14 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.493 05:02:14 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.493 05:02:14 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:55.493 05:02:14 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:55.493 05:02:14 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:55.493 05:02:14 -- setup/devices.sh@50 -- # local mount_point= 00:04:55.493 05:02:14 -- setup/devices.sh@51 -- # local test_file= 00:04:55.493 05:02:14 -- setup/devices.sh@53 -- # local found=0 00:04:55.493 05:02:14 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:55.493 05:02:14 -- setup/devices.sh@59 -- # local pci status 00:04:55.493 05:02:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.493 05:02:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:55.493 05:02:14 -- setup/devices.sh@47 -- # 
setup output config 00:04:55.493 05:02:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.493 05:02:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.752 05:02:14 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.752 05:02:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.752 05:02:14 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:55.752 05:02:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.320 05:02:15 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:56.320 05:02:15 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:56.320 05:02:15 -- setup/devices.sh@63 -- # found=1 00:04:56.320 05:02:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.320 05:02:15 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:56.320 05:02:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.320 05:02:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:56.320 05:02:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.580 05:02:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:56.580 05:02:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.580 05:02:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.580 05:02:15 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:56.580 05:02:15 -- setup/devices.sh@68 -- # return 0 00:04:56.580 05:02:15 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:56.580 05:02:15 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.580 05:02:15 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:56.580 05:02:15 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:56.580 05:02:15 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:56.580 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:56.580 00:04:56.580 real 0m5.359s 00:04:56.580 user 0m1.273s 00:04:56.580 sys 0m1.807s 00:04:56.580 05:02:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.580 05:02:15 -- common/autotest_common.sh@10 -- # set +x 00:04:56.580 ************************************ 00:04:56.580 END TEST nvme_mount 00:04:56.580 ************************************ 00:04:56.580 05:02:15 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:56.580 05:02:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.580 05:02:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.580 05:02:15 -- common/autotest_common.sh@10 -- # set +x 00:04:56.580 ************************************ 00:04:56.580 START TEST dm_mount 00:04:56.580 ************************************ 00:04:56.580 05:02:15 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:56.580 05:02:15 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:56.580 05:02:15 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:56.580 05:02:15 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:56.580 05:02:15 -- setup/devices.sh@148 -- # partition_drive nvme1n1 00:04:56.580 05:02:15 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:56.580 05:02:15 -- setup/common.sh@40 -- # local part_no=2 00:04:56.580 05:02:15 -- setup/common.sh@41 -- # local size=1073741824 00:04:56.580 05:02:15 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:56.580 
05:02:15 -- setup/common.sh@44 -- # parts=() 00:04:56.580 05:02:15 -- setup/common.sh@44 -- # local parts 00:04:56.580 05:02:15 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:56.580 05:02:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.580 05:02:15 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.580 05:02:15 -- setup/common.sh@46 -- # (( part++ )) 00:04:56.580 05:02:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.580 05:02:15 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:56.580 05:02:15 -- setup/common.sh@46 -- # (( part++ )) 00:04:56.580 05:02:15 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:56.580 05:02:15 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:56.580 05:02:15 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:56.580 05:02:15 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:57.958 Creating new GPT entries in memory. 00:04:57.959 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:57.959 other utilities. 00:04:57.959 05:02:16 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:57.959 05:02:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.959 05:02:16 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.959 05:02:16 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.959 05:02:16 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:58.895 Creating new GPT entries in memory. 00:04:58.895 The operation has completed successfully. 00:04:58.895 05:02:17 -- setup/common.sh@57 -- # (( part++ )) 00:04:58.895 05:02:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.895 05:02:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:58.895 05:02:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.895 05:02:17 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:59.831 The operation has completed successfully. 
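The partition_drive trace above reduces size=1073741824 by 4096 to 262144 sector units, so each sgdisk --new call lays down 262144 sectors, roughly 128 MiB assuming 512-byte sectors. A minimal standalone sketch of that layout, using the sector ranges from the trace and a hypothetical scratch disk /dev/nvme1n1 (destructive; do not run against a disk holding data):

sgdisk /dev/nvme1n1 --zap-all               # wipe GPT/MBR structures, as in the trace
sgdisk /dev/nvme1n1 --new=1:2048:264191     # partition 1: sectors 2048..264191 (262144 sectors, ~128 MiB)
sgdisk /dev/nvme1n1 --new=2:264192:526335   # partition 2: the next 262144 sectors
partprobe /dev/nvme1n1                      # generic stand-in for the repo's sync_dev_uevents.sh wait

Note that the harness serializes the sgdisk calls behind flock and waits for the partition uevents via scripts/sync_dev_uevents.sh; partprobe here is only a rough substitute for that synchronization.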
00:04:59.831 05:02:18 -- setup/common.sh@57 -- # (( part++ )) 00:04:59.831 05:02:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.831 05:02:18 -- setup/common.sh@62 -- # wait 54616 00:04:59.831 05:02:18 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:59.831 05:02:18 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.831 05:02:18 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.831 05:02:18 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:59.831 05:02:18 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:59.831 05:02:18 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.831 05:02:18 -- setup/devices.sh@161 -- # break 00:04:59.831 05:02:18 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.831 05:02:18 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:59.831 05:02:18 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:59.831 05:02:18 -- setup/devices.sh@166 -- # dm=dm-0 00:04:59.831 05:02:18 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:59.831 05:02:18 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:59.831 05:02:18 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.831 05:02:18 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:59.831 05:02:18 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.831 05:02:18 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:59.831 05:02:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:59.831 05:02:18 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.831 05:02:18 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.831 05:02:18 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:59.831 05:02:18 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:59.832 05:02:18 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.832 05:02:18 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.832 05:02:18 -- setup/devices.sh@53 -- # local found=0 00:04:59.832 05:02:18 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:59.832 05:02:18 -- setup/devices.sh@56 -- # : 00:04:59.832 05:02:18 -- setup/devices.sh@59 -- # local pci status 00:04:59.832 05:02:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:59.832 05:02:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.832 05:02:18 -- setup/devices.sh@47 -- # setup output config 00:04:59.832 05:02:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.832 05:02:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.090 05:02:18 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.090 05:02:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.090 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.090 05:02:19 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.349 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.349 05:02:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:00.349 05:02:19 -- setup/devices.sh@63 -- # found=1 00:05:00.349 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.349 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.349 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.608 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.608 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.608 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.608 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.867 05:02:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.867 05:02:19 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:00.867 05:02:19 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.867 05:02:19 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.867 05:02:19 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:00.867 05:02:19 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:00.867 05:02:19 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:05:00.867 05:02:19 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:05:00.867 05:02:19 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:05:00.867 05:02:19 -- setup/devices.sh@50 -- # local mount_point= 00:05:00.867 05:02:19 -- setup/devices.sh@51 -- # local test_file= 00:05:00.867 05:02:19 -- setup/devices.sh@53 -- # local found=0 00:05:00.867 05:02:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.867 05:02:19 -- setup/devices.sh@59 -- # local pci status 00:05:00.867 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.867 05:02:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:05:00.867 05:02:19 -- setup/devices.sh@47 -- # setup output config 00:05:00.867 05:02:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.867 05:02:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:00.867 05:02:19 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:00.867 05:02:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.126 05:02:20 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.126 05:02:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.385 05:02:20 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.385 05:02:20 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:05:01.385 05:02:20 -- setup/devices.sh@63 -- # found=1 00:05:01.385 05:02:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.385 05:02:20 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.385 05:02:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.703 05:02:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.703 05:02:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.703 05:02:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:05:01.703 05:02:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.703 05:02:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.703 05:02:20 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.703 05:02:20 -- setup/devices.sh@68 -- # return 0 00:05:01.703 05:02:20 -- setup/devices.sh@187 -- # cleanup_dm 00:05:01.703 05:02:20 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.703 05:02:20 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.703 05:02:20 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:01.703 05:02:20 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:01.703 05:02:20 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:05:01.962 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.962 05:02:20 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:01.962 05:02:20 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:05:01.962 00:05:01.962 real 0m5.188s 00:05:01.962 user 0m0.806s 00:05:01.962 sys 0m1.310s 00:05:01.962 05:02:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.962 ************************************ 00:05:01.962 05:02:20 -- common/autotest_common.sh@10 -- # set +x 00:05:01.962 END TEST dm_mount 00:05:01.962 ************************************ 00:05:01.962 05:02:20 -- setup/devices.sh@1 -- # cleanup 00:05:01.962 05:02:20 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:01.962 05:02:20 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.962 05:02:20 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:01.962 05:02:20 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:05:01.962 05:02:20 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:05:01.962 05:02:20 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:05:02.221 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:02.221 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:02.221 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:02.221 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:05:02.221 05:02:21 -- setup/devices.sh@12 -- # cleanup_dm 00:05:02.221 05:02:21 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.221 05:02:21 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:02.221 05:02:21 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:05:02.221 05:02:21 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:05:02.221 05:02:21 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:05:02.221 05:02:21 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:05:02.221 00:05:02.221 real 0m12.687s 00:05:02.221 user 0m2.968s 00:05:02.221 sys 0m4.083s 00:05:02.221 05:02:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.221 05:02:21 -- common/autotest_common.sh@10 -- # set +x 00:05:02.221 ************************************ 00:05:02.221 END TEST devices 00:05:02.221 
************************************ 00:05:02.221 00:05:02.221 real 0m45.914s 00:05:02.221 user 0m10.125s 00:05:02.221 sys 0m15.999s 00:05:02.221 05:02:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.221 ************************************ 00:05:02.221 END TEST setup.sh 00:05:02.221 05:02:21 -- common/autotest_common.sh@10 -- # set +x 00:05:02.221 ************************************ 00:05:02.221 05:02:21 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:02.480 Hugepages 00:05:02.480 node hugesize free / total 00:05:02.480 node0 1048576kB 0 / 0 00:05:02.480 node0 2048kB 2048 / 2048 00:05:02.480 00:05:02.480 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:02.480 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:02.739 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:02.739 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:02.739 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:02.998 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:02.998 05:02:21 -- spdk/autotest.sh@141 -- # uname -s 00:05:02.998 05:02:21 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:02.998 05:02:21 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:02.998 05:02:21 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.193 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.193 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.193 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.452 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.452 05:02:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:05.387 05:02:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:05.387 05:02:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:05.387 05:02:24 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:05.387 05:02:24 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:05.387 05:02:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:05.387 05:02:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:05.387 05:02:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.387 05:02:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.387 05:02:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:05.646 05:02:24 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:05.646 05:02:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:05.646 05:02:24 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.215 Waiting for block devices as requested 00:05:06.215 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.474 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.475 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:06.475 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:11.746 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:05:11.746 05:02:30 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 
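Before the per-controller loop that follows, get_nvme_bdfs above builds the bdfs array by asking gen_nvme.sh for an SPDK JSON config and pulling each PCI address (traddr) out with jq; the filter below is copied verbatim from the trace, and the repo path is the one the trace uses:

rootdir=/home/vagrant/spdk_repo/spdk
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
printf '%s\n' "${bdfs[@]}"   # here: 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0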
00:05:11.746 05:02:30 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:11.746 05:02:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:11.746 05:02:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:11.746 05:02:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:11.746 05:02:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:05:11.746 05:02:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:05:11.746 05:02:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:11.746 05:02:30 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme2 00:05:11.746 05:02:30 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme2 ]] 00:05:11.746 05:02:30 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme2 00:05:11.746 05:02:30 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:11.746 05:02:30 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:11.746 05:02:30 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:11.746 05:02:30 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:11.746 05:02:30 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:11.746 05:02:30 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme2 00:05:11.746 05:02:30 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:11.746 05:02:30 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:11.746 05:02:30 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:11.746 05:02:30 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:11.746 05:02:30 -- common/autotest_common.sh@1542 -- # continue 00:05:11.746 05:02:30 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:11.746 05:02:30 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:11.746 05:02:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:11.746 05:02:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:11.746 05:02:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:11.746 05:02:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:05:11.746 05:02:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:05:11.746 05:02:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:11.746 05:02:30 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme3 00:05:11.746 05:02:30 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme3 ]] 00:05:11.746 05:02:30 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme3 00:05:11.746 05:02:30 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:11.746 05:02:30 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:11.746 05:02:30 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:11.746 05:02:30 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:11.746 05:02:30 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:11.746 05:02:30 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:11.746 05:02:30 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme3 00:05:11.746 05:02:30 -- 
common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:11.746 05:02:30 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:11.746 05:02:30 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:11.746 05:02:30 -- common/autotest_common.sh@1542 -- # continue 00:05:11.746 05:02:30 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:11.746 05:02:30 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:05:11.746 05:02:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:11.746 05:02:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:08.0/nvme/nvme 00:05:11.746 05:02:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:11.746 05:02:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:05:11.746 05:02:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:05:11.746 05:02:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:11.746 05:02:30 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:11.746 05:02:30 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:11.746 05:02:30 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:11.746 05:02:30 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:11.746 05:02:30 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:11.746 05:02:30 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:11.746 05:02:30 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:11.746 05:02:30 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:11.746 05:02:30 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:11.746 05:02:30 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:11.747 05:02:30 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:11.747 05:02:30 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:11.747 05:02:30 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:11.747 05:02:30 -- common/autotest_common.sh@1542 -- # continue 00:05:11.747 05:02:30 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:11.747 05:02:30 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:05:11.747 05:02:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:09.0/nvme/nvme 00:05:11.747 05:02:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:11.747 05:02:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:11.747 05:02:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:05:11.747 05:02:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:05:11.747 05:02:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:11.747 05:02:30 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:11.747 05:02:30 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:11.747 05:02:30 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:11.747 05:02:30 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:11.747 05:02:30 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:11.747 05:02:30 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 
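The oacs=' 0x12a' value parsed above comes from nvme id-ctrl, and the derived oacs_ns_manage=8 is bit 3 (0x8) of the Optional Admin Command Support field, which advertises Namespace Management. A sketch of the same check, assuming nvme-cli is installed and using one of the controllers from the trace:

oacs=$(nvme id-ctrl /dev/nvme2 | grep oacs | cut -d: -f2)   # -> ' 0x12a'
if (( (oacs & 0x8) != 0 )); then                            # bit 3 = namespace management
    echo "namespace management supported"
fi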
00:05:11.747 05:02:30 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:11.747 05:02:30 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:11.747 05:02:30 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:11.747 05:02:30 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:11.747 05:02:30 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:11.747 05:02:30 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:11.747 05:02:30 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:11.747 05:02:30 -- common/autotest_common.sh@1542 -- # continue 00:05:11.747 05:02:30 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:11.747 05:02:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:11.747 05:02:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.747 05:02:30 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:11.747 05:02:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:11.747 05:02:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.747 05:02:30 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.121 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.121 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.121 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.121 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.121 05:02:32 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:13.121 05:02:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:13.121 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:05:13.379 05:02:32 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:13.379 05:02:32 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:13.379 05:02:32 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:13.380 05:02:32 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:13.380 05:02:32 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:13.380 05:02:32 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:13.380 05:02:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:13.380 05:02:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:13.380 05:02:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.380 05:02:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:13.380 05:02:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:13.380 05:02:32 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:13.380 05:02:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:05:13.380 05:02:32 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.380 05:02:32 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:13.380 05:02:32 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:13.380 05:02:32 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.380 05:02:32 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.380 05:02:32 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:13.380 05:02:32 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:13.380 05:02:32 -- common/autotest_common.sh@1566 -- # [[ 
0x0010 == \0\x\0\a\5\4 ]] 00:05:13.380 05:02:32 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.380 05:02:32 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:05:13.380 05:02:32 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:13.380 05:02:32 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.380 05:02:32 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.380 05:02:32 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:05:13.380 05:02:32 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:13.380 05:02:32 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.380 05:02:32 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:13.380 05:02:32 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:13.380 05:02:32 -- common/autotest_common.sh@1578 -- # return 0 00:05:13.380 05:02:32 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:13.380 05:02:32 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:13.380 05:02:32 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:13.380 05:02:32 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:13.380 05:02:32 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:13.380 05:02:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:13.380 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:05:13.380 05:02:32 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:13.380 05:02:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.380 05:02:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.380 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:05:13.380 ************************************ 00:05:13.380 START TEST env 00:05:13.380 ************************************ 00:05:13.380 05:02:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:13.638 * Looking for test storage... 
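The opal_revert_cleanup pass above reads each controller's PCI device ID from sysfs and compares it against 0x0a54 (a device ID used by Intel data-center NVMe drives that the OPAL tests target); QEMU's emulated controllers report 0x0010, so every comparison fails and no revert is issued. A sketch of that gate, with the BDFs taken from the trace:

for bdf in 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")        # 0x0010 on this VM
    [[ $device == 0x0a54 ]] && echo "$bdf: OPAL-capable, would revert"
done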
00:05:13.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:13.638 05:02:32 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:13.638 05:02:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.638 05:02:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.638 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:05:13.638 ************************************ 00:05:13.638 START TEST env_memory 00:05:13.638 ************************************ 00:05:13.638 05:02:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:13.638 00:05:13.638 00:05:13.638 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.638 http://cunit.sourceforge.net/ 00:05:13.638 00:05:13.638 00:05:13.638 Suite: memory 00:05:13.638 Test: alloc and free memory map ...[2024-07-26 05:02:32.599251] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:13.638 passed 00:05:13.638 Test: mem map translation ...[2024-07-26 05:02:32.667608] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:13.638 [2024-07-26 05:02:32.667843] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:13.638 [2024-07-26 05:02:32.668103] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:13.638 [2024-07-26 05:02:32.668345] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:13.896 passed 00:05:13.896 Test: mem map registration ...[2024-07-26 05:02:32.773348] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:13.896 [2024-07-26 05:02:32.773424] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:13.896 passed 00:05:13.896 Test: mem map adjacent registrations ...passed 00:05:13.896 00:05:13.896 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.896 suites 1 1 n/a 0 0 00:05:13.896 tests 4 4 4 0 0 00:05:13.896 asserts 152 152 152 0 n/a 00:05:13.896 00:05:13.896 Elapsed time = 0.371 seconds 00:05:13.896 00:05:13.896 real 0m0.419s 00:05:13.896 user 0m0.375s 00:05:13.896 sys 0m0.036s 00:05:13.896 05:02:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.896 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:05:13.896 ************************************ 00:05:13.896 END TEST env_memory 00:05:13.896 ************************************ 00:05:13.896 05:02:32 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:13.896 05:02:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.896 05:02:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.896 05:02:32 -- common/autotest_common.sh@10 -- # set +x 00:05:13.896 ************************************ 00:05:13.896 START TEST env_vtophys 00:05:13.896 ************************************ 00:05:13.896 05:02:33 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:14.155 EAL: lib.eal log level changed from notice to debug 00:05:14.155 EAL: Detected lcore 0 as core 0 on socket 0 00:05:14.155 EAL: Detected lcore 1 as core 0 on socket 0 00:05:14.155 EAL: Detected lcore 2 as core 0 on socket 0 00:05:14.155 EAL: Detected lcore 3 as core 0 on socket 0 00:05:14.155 EAL: Detected lcore 4 as core 0 on socket 0 00:05:14.155 EAL: Detected lcore 5 as core 0 on socket 0 00:05:14.155 EAL: Detected lcore 6 as core 0 on socket 0 00:05:14.155 EAL: Detected lcore 7 as core 0 on socket 0 00:05:14.155 EAL: Detected lcore 8 as core 0 on socket 0 00:05:14.155 EAL: Detected lcore 9 as core 0 on socket 0 00:05:14.155 EAL: Maximum logical cores by configuration: 128 00:05:14.155 EAL: Detected CPU lcores: 10 00:05:14.155 EAL: Detected NUMA nodes: 1 00:05:14.155 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:14.155 EAL: Detected shared linkage of DPDK 00:05:14.155 EAL: No shared files mode enabled, IPC will be disabled 00:05:14.155 EAL: Selected IOVA mode 'PA' 00:05:14.155 EAL: Probing VFIO support... 00:05:14.155 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:14.155 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:14.155 EAL: Ask a virtual area of 0x2e000 bytes 00:05:14.155 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:14.155 EAL: Setting up physically contiguous memory... 00:05:14.155 EAL: Setting maximum number of open files to 524288 00:05:14.155 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:14.155 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:14.155 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.155 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:14.155 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.155 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.155 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:14.155 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:14.155 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.155 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:14.155 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.155 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.155 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:14.155 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:14.155 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.155 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:14.155 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.155 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.155 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:14.155 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:14.155 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.155 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:14.155 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.155 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.155 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:14.155 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:14.155 EAL: Hugepages will be freed exactly as allocated. 
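The four "Ask a virtual area of 0x400000000 bytes" requests above are pure address-space reservations: each memseg list created with n_segs:8192 holds one 2 MiB hugepage per slot, and 8192 * 2 MiB = 16 GiB = 0x400000000, so EAL reserves 4 * 16 GiB of virtual addresses up front while physical hugepages are mapped only on demand. The arithmetic, checked in shell:

printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # 0x400000000 = 16 GiB per memseg list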
00:05:14.155 EAL: No shared files mode enabled, IPC is disabled 00:05:14.155 EAL: No shared files mode enabled, IPC is disabled 00:05:14.155 EAL: TSC frequency is ~2100000 KHz 00:05:14.155 EAL: Main lcore 0 is ready (tid=7ffa6c6f3a40;cpuset=[0]) 00:05:14.155 EAL: Trying to obtain current memory policy. 00:05:14.155 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.155 EAL: Restoring previous memory policy: 0 00:05:14.155 EAL: request: mp_malloc_sync 00:05:14.155 EAL: No shared files mode enabled, IPC is disabled 00:05:14.155 EAL: Heap on socket 0 was expanded by 2MB 00:05:14.155 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:14.155 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:14.155 EAL: Mem event callback 'spdk:(nil)' registered 00:05:14.155 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:14.155 00:05:14.155 00:05:14.155 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.155 http://cunit.sourceforge.net/ 00:05:14.155 00:05:14.155 00:05:14.155 Suite: components_suite 00:05:14.721 Test: vtophys_malloc_test ...passed 00:05:14.721 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:14.721 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.721 EAL: Restoring previous memory policy: 4 00:05:14.721 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.721 EAL: request: mp_malloc_sync 00:05:14.721 EAL: No shared files mode enabled, IPC is disabled 00:05:14.721 EAL: Heap on socket 0 was expanded by 4MB 00:05:14.721 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.721 EAL: request: mp_malloc_sync 00:05:14.721 EAL: No shared files mode enabled, IPC is disabled 00:05:14.721 EAL: Heap on socket 0 was shrunk by 4MB 00:05:14.721 EAL: Trying to obtain current memory policy. 00:05:14.721 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.721 EAL: Restoring previous memory policy: 4 00:05:14.721 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.721 EAL: request: mp_malloc_sync 00:05:14.721 EAL: No shared files mode enabled, IPC is disabled 00:05:14.721 EAL: Heap on socket 0 was expanded by 6MB 00:05:14.721 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.721 EAL: request: mp_malloc_sync 00:05:14.721 EAL: No shared files mode enabled, IPC is disabled 00:05:14.721 EAL: Heap on socket 0 was shrunk by 6MB 00:05:14.721 EAL: Trying to obtain current memory policy. 00:05:14.721 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.721 EAL: Restoring previous memory policy: 4 00:05:14.721 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.721 EAL: request: mp_malloc_sync 00:05:14.721 EAL: No shared files mode enabled, IPC is disabled 00:05:14.721 EAL: Heap on socket 0 was expanded by 10MB 00:05:14.721 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.721 EAL: request: mp_malloc_sync 00:05:14.721 EAL: No shared files mode enabled, IPC is disabled 00:05:14.721 EAL: Heap on socket 0 was shrunk by 10MB 00:05:14.721 EAL: Trying to obtain current memory policy. 
00:05:14.721 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.721 EAL: Restoring previous memory policy: 4 00:05:14.721 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.721 EAL: request: mp_malloc_sync 00:05:14.721 EAL: No shared files mode enabled, IPC is disabled 00:05:14.721 EAL: Heap on socket 0 was expanded by 18MB 00:05:14.721 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.721 EAL: request: mp_malloc_sync 00:05:14.721 EAL: No shared files mode enabled, IPC is disabled 00:05:14.721 EAL: Heap on socket 0 was shrunk by 18MB 00:05:14.978 EAL: Trying to obtain current memory policy. 00:05:14.978 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.978 EAL: Restoring previous memory policy: 4 00:05:14.978 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.978 EAL: request: mp_malloc_sync 00:05:14.978 EAL: No shared files mode enabled, IPC is disabled 00:05:14.978 EAL: Heap on socket 0 was expanded by 34MB 00:05:14.978 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.978 EAL: request: mp_malloc_sync 00:05:14.978 EAL: No shared files mode enabled, IPC is disabled 00:05:14.978 EAL: Heap on socket 0 was shrunk by 34MB 00:05:14.978 EAL: Trying to obtain current memory policy. 00:05:14.978 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.978 EAL: Restoring previous memory policy: 4 00:05:14.978 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.978 EAL: request: mp_malloc_sync 00:05:14.978 EAL: No shared files mode enabled, IPC is disabled 00:05:14.978 EAL: Heap on socket 0 was expanded by 66MB 00:05:15.236 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.236 EAL: request: mp_malloc_sync 00:05:15.236 EAL: No shared files mode enabled, IPC is disabled 00:05:15.236 EAL: Heap on socket 0 was shrunk by 66MB 00:05:15.236 EAL: Trying to obtain current memory policy. 00:05:15.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.236 EAL: Restoring previous memory policy: 4 00:05:15.236 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.236 EAL: request: mp_malloc_sync 00:05:15.236 EAL: No shared files mode enabled, IPC is disabled 00:05:15.236 EAL: Heap on socket 0 was expanded by 130MB 00:05:15.494 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.494 EAL: request: mp_malloc_sync 00:05:15.494 EAL: No shared files mode enabled, IPC is disabled 00:05:15.494 EAL: Heap on socket 0 was shrunk by 130MB 00:05:15.752 EAL: Trying to obtain current memory policy. 00:05:15.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.752 EAL: Restoring previous memory policy: 4 00:05:15.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.752 EAL: request: mp_malloc_sync 00:05:15.752 EAL: No shared files mode enabled, IPC is disabled 00:05:15.752 EAL: Heap on socket 0 was expanded by 258MB 00:05:16.319 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.319 EAL: request: mp_malloc_sync 00:05:16.319 EAL: No shared files mode enabled, IPC is disabled 00:05:16.319 EAL: Heap on socket 0 was shrunk by 258MB 00:05:16.886 EAL: Trying to obtain current memory policy. 
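The expand/shrink pairs above and below follow a fixed pattern: vtophys_spdk_malloc_test allocates buffers that double from 2 MB up to 1024 MB, and each allocation expands the heap by the buffer size plus what appears to be one extra 2 MB hugepage of heap overhead — 4, 6, 10, 18, 34 MB and so on, up to the 514 MB and 1026 MB steps later in this suite. A quick, purely illustrative cross-check of that sequence:

```bash
# Prints the expected heap-expansion sizes seen in this suite:
# buffer of 2^n MB plus one extra 2 MB hugepage per allocation.
for n in $(seq 1 10); do
  echo "expected expansion: $(( (1 << n) + 2 ))MB"
done
```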
00:05:16.886 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.886 EAL: Restoring previous memory policy: 4 00:05:16.886 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.886 EAL: request: mp_malloc_sync 00:05:16.886 EAL: No shared files mode enabled, IPC is disabled 00:05:16.886 EAL: Heap on socket 0 was expanded by 514MB 00:05:18.262 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.262 EAL: request: mp_malloc_sync 00:05:18.262 EAL: No shared files mode enabled, IPC is disabled 00:05:18.262 EAL: Heap on socket 0 was shrunk by 514MB 00:05:18.895 EAL: Trying to obtain current memory policy. 00:05:18.895 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.154 EAL: Restoring previous memory policy: 4 00:05:19.154 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.154 EAL: request: mp_malloc_sync 00:05:19.154 EAL: No shared files mode enabled, IPC is disabled 00:05:19.154 EAL: Heap on socket 0 was expanded by 1026MB 00:05:21.687 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.687 EAL: request: mp_malloc_sync 00:05:21.687 EAL: No shared files mode enabled, IPC is disabled 00:05:21.687 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:23.064 passed 00:05:23.064 00:05:23.064 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.064 suites 1 1 n/a 0 0 00:05:23.064 tests 2 2 2 0 0 00:05:23.064 asserts 5390 5390 5390 0 n/a 00:05:23.064 00:05:23.064 Elapsed time = 8.580 seconds 00:05:23.064 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.064 EAL: request: mp_malloc_sync 00:05:23.064 EAL: No shared files mode enabled, IPC is disabled 00:05:23.064 EAL: Heap on socket 0 was shrunk by 2MB 00:05:23.064 EAL: No shared files mode enabled, IPC is disabled 00:05:23.064 EAL: No shared files mode enabled, IPC is disabled 00:05:23.064 EAL: No shared files mode enabled, IPC is disabled 00:05:23.064 00:05:23.064 real 0m8.932s 00:05:23.064 user 0m7.888s 00:05:23.064 sys 0m0.878s 00:05:23.064 ************************************ 00:05:23.064 05:02:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.064 05:02:41 -- common/autotest_common.sh@10 -- # set +x 00:05:23.064 END TEST env_vtophys 00:05:23.065 ************************************ 00:05:23.065 05:02:41 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:23.065 05:02:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.065 05:02:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.065 05:02:41 -- common/autotest_common.sh@10 -- # set +x 00:05:23.065 ************************************ 00:05:23.065 START TEST env_pci 00:05:23.065 ************************************ 00:05:23.065 05:02:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:23.065 00:05:23.065 00:05:23.065 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.065 http://cunit.sourceforge.net/ 00:05:23.065 00:05:23.065 00:05:23.065 Suite: pci 00:05:23.065 Test: pci_hook ...[2024-07-26 05:02:42.028796] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56370 has claimed it 00:05:23.065 passed 00:05:23.065 00:05:23.065 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.065 suites 1 1 n/a 0 0 00:05:23.065 tests 1 1 1 0 0 00:05:23.065 asserts 25 25 25 0 n/a 00:05:23.065 00:05:23.065 Elapsed time = 0.007 seconds 00:05:23.065 EAL: Cannot find device (10000:00:01.0) 00:05:23.065 EAL: Failed to attach device 
on primary process 00:05:23.065 ************************************ 00:05:23.065 END TEST env_pci 00:05:23.065 00:05:23.065 real 0m0.088s 00:05:23.065 user 0m0.043s 00:05:23.065 sys 0m0.045s 00:05:23.065 05:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.065 05:02:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.065 ************************************ 00:05:23.065 05:02:42 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:23.065 05:02:42 -- env/env.sh@15 -- # uname 00:05:23.065 05:02:42 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:23.065 05:02:42 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:23.065 05:02:42 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.065 05:02:42 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:23.065 05:02:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.065 05:02:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.065 ************************************ 00:05:23.065 START TEST env_dpdk_post_init 00:05:23.065 ************************************ 00:05:23.065 05:02:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:23.324 EAL: Detected CPU lcores: 10 00:05:23.324 EAL: Detected NUMA nodes: 1 00:05:23.324 EAL: Detected shared linkage of DPDK 00:05:23.324 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.324 EAL: Selected IOVA mode 'PA' 00:05:23.324 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.324 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:23.324 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:23.324 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:05:23.324 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:05:23.582 Starting DPDK initialization... 00:05:23.582 Starting SPDK post initialization... 00:05:23.582 SPDK NVMe probe 00:05:23.582 Attaching to 0000:00:06.0 00:05:23.582 Attaching to 0000:00:07.0 00:05:23.582 Attaching to 0000:00:08.0 00:05:23.582 Attaching to 0000:00:09.0 00:05:23.582 Attached to 0000:00:06.0 00:05:23.582 Attached to 0000:00:07.0 00:05:23.582 Attached to 0000:00:09.0 00:05:23.582 Attached to 0000:00:08.0 00:05:23.582 Cleaning up... 
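env_dpdk_post_init probed and attached four QEMU-emulated NVMe controllers (vendor:device 1b36:0010) at 0000:00:06.0 through 0000:00:09.0; socket -1 means the VM exposes no NUMA locality for them. To inspect the same devices by hand before a run — setup.sh status and the test invocation are verbatim from this log, while the lspci filter is an assumption about the guest's tooling:

```bash
# Show current driver bindings, list the QEMU NVMe functions, then rerun
# the post-init test with the same core mask and base virtual address.
/home/vagrant/spdk_repo/spdk/scripts/setup.sh status
lspci -nn | grep 1b36:0010
/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
  -c 0x1 --base-virtaddr=0x200000000000
```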
00:05:23.582 00:05:23.582 real 0m0.312s 00:05:23.582 user 0m0.103s 00:05:23.582 sys 0m0.111s 00:05:23.582 05:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.582 ************************************ 00:05:23.582 END TEST env_dpdk_post_init 00:05:23.582 ************************************ 00:05:23.582 05:02:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.582 05:02:42 -- env/env.sh@26 -- # uname 00:05:23.582 05:02:42 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:23.582 05:02:42 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.582 05:02:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.582 05:02:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.582 05:02:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.582 ************************************ 00:05:23.582 START TEST env_mem_callbacks 00:05:23.582 ************************************ 00:05:23.582 05:02:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:23.582 EAL: Detected CPU lcores: 10 00:05:23.582 EAL: Detected NUMA nodes: 1 00:05:23.583 EAL: Detected shared linkage of DPDK 00:05:23.583 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.583 EAL: Selected IOVA mode 'PA' 00:05:23.841 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.841 00:05:23.841 00:05:23.841 CUnit - A unit testing framework for C - Version 2.1-3 00:05:23.841 http://cunit.sourceforge.net/ 00:05:23.841 00:05:23.841 00:05:23.841 Suite: memory 00:05:23.841 Test: test ... 00:05:23.841 register 0x200000200000 2097152 00:05:23.841 malloc 3145728 00:05:23.841 register 0x200000400000 4194304 00:05:23.841 buf 0x2000004fffc0 len 3145728 PASSED 00:05:23.841 malloc 64 00:05:23.841 buf 0x2000004ffec0 len 64 PASSED 00:05:23.841 malloc 4194304 00:05:23.841 register 0x200000800000 6291456 00:05:23.841 buf 0x2000009fffc0 len 4194304 PASSED 00:05:23.841 free 0x2000004fffc0 3145728 00:05:23.841 free 0x2000004ffec0 64 00:05:23.841 unregister 0x200000400000 4194304 PASSED 00:05:23.841 free 0x2000009fffc0 4194304 00:05:23.841 unregister 0x200000800000 6291456 PASSED 00:05:23.841 malloc 8388608 00:05:23.841 register 0x200000400000 10485760 00:05:23.841 buf 0x2000005fffc0 len 8388608 PASSED 00:05:23.841 free 0x2000005fffc0 8388608 00:05:23.841 unregister 0x200000400000 10485760 PASSED 00:05:23.841 passed 00:05:23.841 00:05:23.841 Run Summary: Type Total Ran Passed Failed Inactive 00:05:23.841 suites 1 1 n/a 0 0 00:05:23.841 tests 1 1 1 0 0 00:05:23.841 asserts 15 15 15 0 n/a 00:05:23.841 00:05:23.841 Elapsed time = 0.073 seconds 00:05:23.841 00:05:23.841 real 0m0.296s 00:05:23.841 user 0m0.117s 00:05:23.841 sys 0m0.075s 00:05:23.841 05:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.841 05:02:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.841 ************************************ 00:05:23.841 END TEST env_mem_callbacks 00:05:23.841 ************************************ 00:05:23.841 00:05:23.842 real 0m10.475s 00:05:23.842 user 0m8.661s 00:05:23.842 sys 0m1.430s 00:05:23.842 05:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.842 05:02:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.842 ************************************ 00:05:23.842 END TEST env 00:05:23.842 ************************************ 00:05:23.842 05:02:42 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
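The register/unregister trace in the env_mem_callbacks suite above shows SPDK's 2 MB registration granularity: a 3 MB malloc (3145728) registers a 4 MB region (4194304), and a 4 MB malloc triggers a 6 MB registration, the same one-extra-hugepage overhead seen in env_vtophys. The test can be rerun on its own to watch the pairing (path taken from this run):

```bash
# Standalone run of the callback test; expect each malloc to register a
# region rounded up to 2 MB hugepage units, and frees to unregister them.
/home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
```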
00:05:23.842 05:02:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.842 05:02:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.842 05:02:42 -- common/autotest_common.sh@10 -- # set +x 00:05:23.842 ************************************ 00:05:23.842 START TEST rpc 00:05:23.842 ************************************ 00:05:23.842 05:02:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:24.101 * Looking for test storage... 00:05:24.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:24.101 05:02:43 -- rpc/rpc.sh@65 -- # spdk_pid=56488 00:05:24.101 05:02:43 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.101 05:02:43 -- rpc/rpc.sh@67 -- # waitforlisten 56488 00:05:24.101 05:02:43 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:24.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.101 05:02:43 -- common/autotest_common.sh@819 -- # '[' -z 56488 ']' 00:05:24.101 05:02:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.101 05:02:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:24.101 05:02:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.101 05:02:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:24.101 05:02:43 -- common/autotest_common.sh@10 -- # set +x 00:05:24.101 [2024-07-26 05:02:43.178648] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:24.101 [2024-07-26 05:02:43.179021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56488 ] 00:05:24.360 [2024-07-26 05:02:43.368961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.619 [2024-07-26 05:02:43.705240] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.619 [2024-07-26 05:02:43.705553] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:24.619 [2024-07-26 05:02:43.705614] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56488' to capture a snapshot of events at runtime. 00:05:24.619 [2024-07-26 05:02:43.705722] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56488 for offline analysis/debug. 
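The target advertises its own trace hooks here: '-e bdev' enabled the bdev tracepoint group (reported later as tpoint_group_mask 0x8 by trace_get_info), and the shared-memory trace ring lives at /dev/shm/spdk_tgt_trace.pid56488. Both capture commands below are the ones the target prints above, with pid 56488 from this run:

```bash
# Decode the live trace ring for this spdk_tgt instance, or copy the shm
# file aside to keep it for offline analysis after the target exits.
spdk_trace -s spdk_tgt -p 56488
cp /dev/shm/spdk_tgt_trace.pid56488 /tmp/
```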
00:05:24.619 [2024-07-26 05:02:43.705861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.998 05:02:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:25.998 05:02:44 -- common/autotest_common.sh@852 -- # return 0 00:05:25.998 05:02:44 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:25.998 05:02:44 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:25.998 05:02:44 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:25.998 05:02:44 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:25.998 05:02:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.998 05:02:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.998 05:02:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.998 ************************************ 00:05:25.998 START TEST rpc_integrity 00:05:25.998 ************************************ 00:05:25.998 05:02:44 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:25.998 05:02:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:25.998 05:02:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.998 05:02:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.998 05:02:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.998 05:02:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:25.998 05:02:44 -- rpc/rpc.sh@13 -- # jq length 00:05:25.998 05:02:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:25.998 05:02:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:25.998 05:02:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.998 05:02:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.998 05:02:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.998 05:02:44 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:25.998 05:02:44 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:25.998 05:02:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.998 05:02:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.998 05:02:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.998 05:02:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:25.998 { 00:05:25.998 "name": "Malloc0", 00:05:25.998 "aliases": [ 00:05:25.998 "d9e68ffd-a94e-4a20-b32d-0f425ed84a52" 00:05:25.998 ], 00:05:25.998 "product_name": "Malloc disk", 00:05:25.998 "block_size": 512, 00:05:25.998 "num_blocks": 16384, 00:05:25.998 "uuid": "d9e68ffd-a94e-4a20-b32d-0f425ed84a52", 00:05:25.998 "assigned_rate_limits": { 00:05:25.998 "rw_ios_per_sec": 0, 00:05:25.998 "rw_mbytes_per_sec": 0, 00:05:25.998 "r_mbytes_per_sec": 0, 00:05:25.998 "w_mbytes_per_sec": 0 00:05:25.998 }, 00:05:25.998 "claimed": false, 00:05:25.998 "zoned": false, 00:05:25.998 "supported_io_types": { 00:05:25.998 "read": true, 00:05:25.998 "write": true, 00:05:25.998 "unmap": true, 00:05:25.998 "write_zeroes": true, 00:05:25.998 "flush": true, 00:05:25.998 "reset": true, 00:05:25.998 "compare": false, 00:05:25.998 "compare_and_write": false, 00:05:25.998 "abort": true, 00:05:25.998 "nvme_admin": false, 00:05:25.998 "nvme_io": false 00:05:25.998 }, 00:05:25.998 "memory_domains": [ 00:05:25.998 { 00:05:25.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.998 
"dma_device_type": 2 00:05:25.998 } 00:05:25.998 ], 00:05:25.998 "driver_specific": {} 00:05:25.998 } 00:05:25.998 ]' 00:05:25.998 05:02:44 -- rpc/rpc.sh@17 -- # jq length 00:05:25.998 05:02:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:25.998 05:02:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:25.998 05:02:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.998 05:02:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.998 [2024-07-26 05:02:44.962001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:25.998 [2024-07-26 05:02:44.962064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.998 [2024-07-26 05:02:44.962094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:05:25.998 [2024-07-26 05:02:44.962117] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.998 [2024-07-26 05:02:44.964897] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.998 [2024-07-26 05:02:44.965055] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:25.998 Passthru0 00:05:25.998 05:02:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.998 05:02:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:25.998 05:02:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.998 05:02:44 -- common/autotest_common.sh@10 -- # set +x 00:05:25.998 05:02:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.998 05:02:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:25.998 { 00:05:25.998 "name": "Malloc0", 00:05:25.998 "aliases": [ 00:05:25.998 "d9e68ffd-a94e-4a20-b32d-0f425ed84a52" 00:05:25.998 ], 00:05:25.998 "product_name": "Malloc disk", 00:05:25.998 "block_size": 512, 00:05:25.998 "num_blocks": 16384, 00:05:25.998 "uuid": "d9e68ffd-a94e-4a20-b32d-0f425ed84a52", 00:05:25.998 "assigned_rate_limits": { 00:05:25.998 "rw_ios_per_sec": 0, 00:05:25.998 "rw_mbytes_per_sec": 0, 00:05:25.998 "r_mbytes_per_sec": 0, 00:05:25.998 "w_mbytes_per_sec": 0 00:05:25.998 }, 00:05:25.998 "claimed": true, 00:05:25.998 "claim_type": "exclusive_write", 00:05:25.998 "zoned": false, 00:05:25.998 "supported_io_types": { 00:05:25.998 "read": true, 00:05:25.998 "write": true, 00:05:25.998 "unmap": true, 00:05:25.998 "write_zeroes": true, 00:05:25.998 "flush": true, 00:05:25.998 "reset": true, 00:05:25.998 "compare": false, 00:05:25.999 "compare_and_write": false, 00:05:25.999 "abort": true, 00:05:25.999 "nvme_admin": false, 00:05:25.999 "nvme_io": false 00:05:25.999 }, 00:05:25.999 "memory_domains": [ 00:05:25.999 { 00:05:25.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.999 "dma_device_type": 2 00:05:25.999 } 00:05:25.999 ], 00:05:25.999 "driver_specific": {} 00:05:25.999 }, 00:05:25.999 { 00:05:25.999 "name": "Passthru0", 00:05:25.999 "aliases": [ 00:05:25.999 "05e72d3b-af8e-5dde-b332-c9973973044c" 00:05:25.999 ], 00:05:25.999 "product_name": "passthru", 00:05:25.999 "block_size": 512, 00:05:25.999 "num_blocks": 16384, 00:05:25.999 "uuid": "05e72d3b-af8e-5dde-b332-c9973973044c", 00:05:25.999 "assigned_rate_limits": { 00:05:25.999 "rw_ios_per_sec": 0, 00:05:25.999 "rw_mbytes_per_sec": 0, 00:05:25.999 "r_mbytes_per_sec": 0, 00:05:25.999 "w_mbytes_per_sec": 0 00:05:25.999 }, 00:05:25.999 "claimed": false, 00:05:25.999 "zoned": false, 00:05:25.999 "supported_io_types": { 00:05:25.999 "read": true, 00:05:25.999 "write": true, 00:05:25.999 "unmap": true, 00:05:25.999 
"write_zeroes": true, 00:05:25.999 "flush": true, 00:05:25.999 "reset": true, 00:05:25.999 "compare": false, 00:05:25.999 "compare_and_write": false, 00:05:25.999 "abort": true, 00:05:25.999 "nvme_admin": false, 00:05:25.999 "nvme_io": false 00:05:25.999 }, 00:05:25.999 "memory_domains": [ 00:05:25.999 { 00:05:25.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:25.999 "dma_device_type": 2 00:05:25.999 } 00:05:25.999 ], 00:05:25.999 "driver_specific": { 00:05:25.999 "passthru": { 00:05:25.999 "name": "Passthru0", 00:05:25.999 "base_bdev_name": "Malloc0" 00:05:25.999 } 00:05:25.999 } 00:05:25.999 } 00:05:25.999 ]' 00:05:25.999 05:02:44 -- rpc/rpc.sh@21 -- # jq length 00:05:25.999 05:02:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:25.999 05:02:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:25.999 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.999 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:25.999 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.999 05:02:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:25.999 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.999 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:25.999 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.999 05:02:45 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:25.999 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.999 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:25.999 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.999 05:02:45 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:25.999 05:02:45 -- rpc/rpc.sh@26 -- # jq length 00:05:26.258 ************************************ 00:05:26.258 END TEST rpc_integrity 00:05:26.258 ************************************ 00:05:26.258 05:02:45 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.258 00:05:26.258 real 0m0.347s 00:05:26.258 user 0m0.199s 00:05:26.258 sys 0m0.050s 00:05:26.258 05:02:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.258 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.258 05:02:45 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:26.258 05:02:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.258 05:02:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.258 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.258 ************************************ 00:05:26.258 START TEST rpc_plugins 00:05:26.258 ************************************ 00:05:26.258 05:02:45 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:26.258 05:02:45 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:26.258 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.258 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.258 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.258 05:02:45 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:26.258 05:02:45 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:26.258 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.258 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.258 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.258 05:02:45 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:26.258 { 00:05:26.258 "name": "Malloc1", 00:05:26.258 "aliases": [ 00:05:26.258 "aeabc949-de0c-4b9c-9105-fefea53a329a" 00:05:26.258 ], 00:05:26.258 "product_name": "Malloc disk", 00:05:26.258 
"block_size": 4096, 00:05:26.258 "num_blocks": 256, 00:05:26.258 "uuid": "aeabc949-de0c-4b9c-9105-fefea53a329a", 00:05:26.258 "assigned_rate_limits": { 00:05:26.258 "rw_ios_per_sec": 0, 00:05:26.258 "rw_mbytes_per_sec": 0, 00:05:26.258 "r_mbytes_per_sec": 0, 00:05:26.258 "w_mbytes_per_sec": 0 00:05:26.258 }, 00:05:26.258 "claimed": false, 00:05:26.258 "zoned": false, 00:05:26.258 "supported_io_types": { 00:05:26.258 "read": true, 00:05:26.258 "write": true, 00:05:26.258 "unmap": true, 00:05:26.258 "write_zeroes": true, 00:05:26.258 "flush": true, 00:05:26.258 "reset": true, 00:05:26.258 "compare": false, 00:05:26.258 "compare_and_write": false, 00:05:26.258 "abort": true, 00:05:26.258 "nvme_admin": false, 00:05:26.258 "nvme_io": false 00:05:26.258 }, 00:05:26.258 "memory_domains": [ 00:05:26.258 { 00:05:26.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.258 "dma_device_type": 2 00:05:26.258 } 00:05:26.258 ], 00:05:26.258 "driver_specific": {} 00:05:26.258 } 00:05:26.258 ]' 00:05:26.258 05:02:45 -- rpc/rpc.sh@32 -- # jq length 00:05:26.258 05:02:45 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:26.258 05:02:45 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:26.258 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.258 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.258 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.258 05:02:45 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:26.258 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.259 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.259 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.259 05:02:45 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:26.259 05:02:45 -- rpc/rpc.sh@36 -- # jq length 00:05:26.259 ************************************ 00:05:26.259 END TEST rpc_plugins 00:05:26.259 ************************************ 00:05:26.259 05:02:45 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:26.259 00:05:26.259 real 0m0.152s 00:05:26.259 user 0m0.086s 00:05:26.259 sys 0m0.023s 00:05:26.259 05:02:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.259 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.517 05:02:45 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:26.517 05:02:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.517 05:02:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.517 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.517 ************************************ 00:05:26.517 START TEST rpc_trace_cmd_test 00:05:26.517 ************************************ 00:05:26.517 05:02:45 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:26.517 05:02:45 -- rpc/rpc.sh@40 -- # local info 00:05:26.518 05:02:45 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:26.518 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.518 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.518 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.518 05:02:45 -- rpc/rpc.sh@42 -- # info='{ 00:05:26.518 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56488", 00:05:26.518 "tpoint_group_mask": "0x8", 00:05:26.518 "iscsi_conn": { 00:05:26.518 "mask": "0x2", 00:05:26.518 "tpoint_mask": "0x0" 00:05:26.518 }, 00:05:26.518 "scsi": { 00:05:26.518 "mask": "0x4", 00:05:26.518 "tpoint_mask": "0x0" 00:05:26.518 }, 00:05:26.518 "bdev": { 00:05:26.518 "mask": "0x8", 00:05:26.518 "tpoint_mask": 
"0xffffffffffffffff" 00:05:26.518 }, 00:05:26.518 "nvmf_rdma": { 00:05:26.518 "mask": "0x10", 00:05:26.518 "tpoint_mask": "0x0" 00:05:26.518 }, 00:05:26.518 "nvmf_tcp": { 00:05:26.518 "mask": "0x20", 00:05:26.518 "tpoint_mask": "0x0" 00:05:26.518 }, 00:05:26.518 "ftl": { 00:05:26.518 "mask": "0x40", 00:05:26.518 "tpoint_mask": "0x0" 00:05:26.518 }, 00:05:26.518 "blobfs": { 00:05:26.518 "mask": "0x80", 00:05:26.518 "tpoint_mask": "0x0" 00:05:26.518 }, 00:05:26.518 "dsa": { 00:05:26.518 "mask": "0x200", 00:05:26.518 "tpoint_mask": "0x0" 00:05:26.518 }, 00:05:26.518 "thread": { 00:05:26.518 "mask": "0x400", 00:05:26.518 "tpoint_mask": "0x0" 00:05:26.518 }, 00:05:26.518 "nvme_pcie": { 00:05:26.518 "mask": "0x800", 00:05:26.518 "tpoint_mask": "0x0" 00:05:26.518 }, 00:05:26.518 "iaa": { 00:05:26.518 "mask": "0x1000", 00:05:26.518 "tpoint_mask": "0x0" 00:05:26.518 }, 00:05:26.518 "nvme_tcp": { 00:05:26.518 "mask": "0x2000", 00:05:26.518 "tpoint_mask": "0x0" 00:05:26.518 }, 00:05:26.518 "bdev_nvme": { 00:05:26.518 "mask": "0x4000", 00:05:26.518 "tpoint_mask": "0x0" 00:05:26.518 } 00:05:26.518 }' 00:05:26.518 05:02:45 -- rpc/rpc.sh@43 -- # jq length 00:05:26.518 05:02:45 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:26.518 05:02:45 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:26.518 05:02:45 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:26.518 05:02:45 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:26.518 05:02:45 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:26.518 05:02:45 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:26.776 05:02:45 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:26.776 05:02:45 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:26.776 ************************************ 00:05:26.776 END TEST rpc_trace_cmd_test 00:05:26.776 ************************************ 00:05:26.776 05:02:45 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:26.776 00:05:26.776 real 0m0.249s 00:05:26.776 user 0m0.208s 00:05:26.776 sys 0m0.032s 00:05:26.776 05:02:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.776 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.776 05:02:45 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:26.776 05:02:45 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:26.776 05:02:45 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:26.776 05:02:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.776 05:02:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.776 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.776 ************************************ 00:05:26.776 START TEST rpc_daemon_integrity 00:05:26.776 ************************************ 00:05:26.776 05:02:45 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:26.776 05:02:45 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.776 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.776 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.776 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.776 05:02:45 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.776 05:02:45 -- rpc/rpc.sh@13 -- # jq length 00:05:26.776 05:02:45 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.776 05:02:45 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.776 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.776 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.776 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.776 05:02:45 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:26.776 05:02:45 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.776 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.776 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:26.776 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.776 05:02:45 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.776 { 00:05:26.776 "name": "Malloc2", 00:05:26.776 "aliases": [ 00:05:26.776 "caa208b0-66f0-4483-b532-3b9672851b9d" 00:05:26.776 ], 00:05:26.776 "product_name": "Malloc disk", 00:05:26.776 "block_size": 512, 00:05:26.776 "num_blocks": 16384, 00:05:26.776 "uuid": "caa208b0-66f0-4483-b532-3b9672851b9d", 00:05:26.776 "assigned_rate_limits": { 00:05:26.776 "rw_ios_per_sec": 0, 00:05:26.776 "rw_mbytes_per_sec": 0, 00:05:26.776 "r_mbytes_per_sec": 0, 00:05:26.776 "w_mbytes_per_sec": 0 00:05:26.776 }, 00:05:26.776 "claimed": false, 00:05:26.776 "zoned": false, 00:05:26.776 "supported_io_types": { 00:05:26.776 "read": true, 00:05:26.776 "write": true, 00:05:26.776 "unmap": true, 00:05:26.776 "write_zeroes": true, 00:05:26.776 "flush": true, 00:05:26.776 "reset": true, 00:05:26.776 "compare": false, 00:05:26.776 "compare_and_write": false, 00:05:26.776 "abort": true, 00:05:26.776 "nvme_admin": false, 00:05:26.776 "nvme_io": false 00:05:26.776 }, 00:05:26.776 "memory_domains": [ 00:05:26.776 { 00:05:26.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.776 "dma_device_type": 2 00:05:26.776 } 00:05:26.776 ], 00:05:26.776 "driver_specific": {} 00:05:26.776 } 00:05:26.776 ]' 00:05:26.776 05:02:45 -- rpc/rpc.sh@17 -- # jq length 00:05:27.035 05:02:45 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.035 05:02:45 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:27.035 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.035 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:27.035 [2024-07-26 05:02:45.897708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:27.035 [2024-07-26 05:02:45.897770] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.035 [2024-07-26 05:02:45.897792] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:05:27.035 [2024-07-26 05:02:45.897807] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.035 [2024-07-26 05:02:45.900356] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.035 [2024-07-26 05:02:45.900399] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.035 Passthru0 00:05:27.035 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.035 05:02:45 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.035 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.035 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:27.035 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.035 05:02:45 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.035 { 00:05:27.035 "name": "Malloc2", 00:05:27.035 "aliases": [ 00:05:27.035 "caa208b0-66f0-4483-b532-3b9672851b9d" 00:05:27.035 ], 00:05:27.035 "product_name": "Malloc disk", 00:05:27.035 "block_size": 512, 00:05:27.035 "num_blocks": 16384, 00:05:27.035 "uuid": "caa208b0-66f0-4483-b532-3b9672851b9d", 00:05:27.035 "assigned_rate_limits": { 00:05:27.035 "rw_ios_per_sec": 0, 00:05:27.035 "rw_mbytes_per_sec": 0, 00:05:27.035 "r_mbytes_per_sec": 0, 00:05:27.035 
"w_mbytes_per_sec": 0 00:05:27.035 }, 00:05:27.035 "claimed": true, 00:05:27.035 "claim_type": "exclusive_write", 00:05:27.035 "zoned": false, 00:05:27.035 "supported_io_types": { 00:05:27.035 "read": true, 00:05:27.035 "write": true, 00:05:27.035 "unmap": true, 00:05:27.035 "write_zeroes": true, 00:05:27.035 "flush": true, 00:05:27.035 "reset": true, 00:05:27.035 "compare": false, 00:05:27.035 "compare_and_write": false, 00:05:27.035 "abort": true, 00:05:27.035 "nvme_admin": false, 00:05:27.035 "nvme_io": false 00:05:27.035 }, 00:05:27.035 "memory_domains": [ 00:05:27.035 { 00:05:27.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.035 "dma_device_type": 2 00:05:27.035 } 00:05:27.035 ], 00:05:27.035 "driver_specific": {} 00:05:27.035 }, 00:05:27.035 { 00:05:27.035 "name": "Passthru0", 00:05:27.035 "aliases": [ 00:05:27.035 "843e32b8-01d9-5f3e-9f1e-4fe8b0988585" 00:05:27.035 ], 00:05:27.035 "product_name": "passthru", 00:05:27.035 "block_size": 512, 00:05:27.035 "num_blocks": 16384, 00:05:27.035 "uuid": "843e32b8-01d9-5f3e-9f1e-4fe8b0988585", 00:05:27.035 "assigned_rate_limits": { 00:05:27.035 "rw_ios_per_sec": 0, 00:05:27.035 "rw_mbytes_per_sec": 0, 00:05:27.035 "r_mbytes_per_sec": 0, 00:05:27.035 "w_mbytes_per_sec": 0 00:05:27.035 }, 00:05:27.035 "claimed": false, 00:05:27.035 "zoned": false, 00:05:27.035 "supported_io_types": { 00:05:27.035 "read": true, 00:05:27.035 "write": true, 00:05:27.035 "unmap": true, 00:05:27.035 "write_zeroes": true, 00:05:27.035 "flush": true, 00:05:27.035 "reset": true, 00:05:27.035 "compare": false, 00:05:27.035 "compare_and_write": false, 00:05:27.035 "abort": true, 00:05:27.035 "nvme_admin": false, 00:05:27.035 "nvme_io": false 00:05:27.035 }, 00:05:27.035 "memory_domains": [ 00:05:27.035 { 00:05:27.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.035 "dma_device_type": 2 00:05:27.035 } 00:05:27.035 ], 00:05:27.035 "driver_specific": { 00:05:27.035 "passthru": { 00:05:27.035 "name": "Passthru0", 00:05:27.035 "base_bdev_name": "Malloc2" 00:05:27.035 } 00:05:27.035 } 00:05:27.035 } 00:05:27.035 ]' 00:05:27.035 05:02:45 -- rpc/rpc.sh@21 -- # jq length 00:05:27.035 05:02:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.035 05:02:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.035 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.035 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:27.035 05:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.035 05:02:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:27.035 05:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.035 05:02:45 -- common/autotest_common.sh@10 -- # set +x 00:05:27.035 05:02:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.035 05:02:46 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.035 05:02:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.035 05:02:46 -- common/autotest_common.sh@10 -- # set +x 00:05:27.035 05:02:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.035 05:02:46 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.035 05:02:46 -- rpc/rpc.sh@26 -- # jq length 00:05:27.035 ************************************ 00:05:27.035 END TEST rpc_daemon_integrity 00:05:27.035 ************************************ 00:05:27.035 05:02:46 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.035 00:05:27.035 real 0m0.347s 00:05:27.035 user 0m0.177s 00:05:27.035 sys 0m0.063s 00:05:27.035 05:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.035 
05:02:46 -- common/autotest_common.sh@10 -- # set +x 00:05:27.035 05:02:46 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.035 05:02:46 -- rpc/rpc.sh@84 -- # killprocess 56488 00:05:27.035 05:02:46 -- common/autotest_common.sh@926 -- # '[' -z 56488 ']' 00:05:27.035 05:02:46 -- common/autotest_common.sh@930 -- # kill -0 56488 00:05:27.035 05:02:46 -- common/autotest_common.sh@931 -- # uname 00:05:27.035 05:02:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:27.035 05:02:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56488 00:05:27.295 killing process with pid 56488 00:05:27.295 05:02:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:27.295 05:02:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:27.295 05:02:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56488' 00:05:27.295 05:02:46 -- common/autotest_common.sh@945 -- # kill 56488 00:05:27.295 05:02:46 -- common/autotest_common.sh@950 -- # wait 56488 00:05:29.833 ************************************ 00:05:29.833 END TEST rpc 00:05:29.833 ************************************ 00:05:29.833 00:05:29.833 real 0m5.668s 00:05:29.833 user 0m6.438s 00:05:29.833 sys 0m0.939s 00:05:29.833 05:02:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.833 05:02:48 -- common/autotest_common.sh@10 -- # set +x 00:05:29.833 05:02:48 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:29.833 05:02:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.833 05:02:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.833 05:02:48 -- common/autotest_common.sh@10 -- # set +x 00:05:29.833 ************************************ 00:05:29.833 START TEST rpc_client 00:05:29.833 ************************************ 00:05:29.833 05:02:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:29.833 * Looking for test storage... 
00:05:29.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:29.833 05:02:48 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:29.833 OK 00:05:29.833 05:02:48 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:29.833 00:05:29.833 real 0m0.158s 00:05:29.833 user 0m0.056s 00:05:29.833 sys 0m0.109s 00:05:29.833 05:02:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.833 05:02:48 -- common/autotest_common.sh@10 -- # set +x 00:05:29.833 ************************************ 00:05:29.833 END TEST rpc_client 00:05:29.833 ************************************ 00:05:29.833 05:02:48 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:29.833 05:02:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.833 05:02:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.833 05:02:48 -- common/autotest_common.sh@10 -- # set +x 00:05:29.833 ************************************ 00:05:29.833 START TEST json_config 00:05:29.833 ************************************ 00:05:29.833 05:02:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:30.094 05:02:48 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:30.094 05:02:48 -- nvmf/common.sh@7 -- # uname -s 00:05:30.094 05:02:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.094 05:02:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.094 05:02:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.094 05:02:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.094 05:02:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.094 05:02:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.094 05:02:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.094 05:02:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.094 05:02:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.094 05:02:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.094 05:02:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b65a3717-0a3d-4378-888e-eeb94342b361 00:05:30.094 05:02:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=b65a3717-0a3d-4378-888e-eeb94342b361 00:05:30.094 05:02:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.094 05:02:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.094 05:02:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:30.094 05:02:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:30.094 05:02:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.094 05:02:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.094 05:02:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.094 05:02:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.094 05:02:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.094 05:02:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.094 05:02:48 -- paths/export.sh@5 -- # export PATH 00:05:30.094 05:02:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.094 05:02:48 -- nvmf/common.sh@46 -- # : 0 00:05:30.094 05:02:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:30.094 05:02:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:30.094 05:02:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:30.094 05:02:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.094 05:02:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.094 05:02:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:30.094 05:02:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:30.094 05:02:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:30.094 05:02:48 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:30.094 05:02:48 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:30.094 05:02:48 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:30.094 05:02:48 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:30.094 05:02:48 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:30.094 WARNING: No tests are enabled so not running JSON configuration tests 00:05:30.094 05:02:48 -- json_config/json_config.sh@27 -- # exit 0 00:05:30.094 00:05:30.094 real 0m0.085s 00:05:30.094 user 0m0.045s 00:05:30.094 sys 0m0.038s 00:05:30.094 05:02:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.094 05:02:48 -- common/autotest_common.sh@10 -- # set +x 00:05:30.094 ************************************ 00:05:30.094 END TEST json_config 00:05:30.094 ************************************ 00:05:30.094 05:02:49 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:30.094 05:02:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.094 05:02:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.094 05:02:49 -- common/autotest_common.sh@10 -- # set +x 00:05:30.094 ************************************ 00:05:30.094 START TEST json_config_extra_key 00:05:30.094 
************************************ 00:05:30.094 05:02:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:30.094 05:02:49 -- nvmf/common.sh@7 -- # uname -s 00:05:30.094 05:02:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.094 05:02:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.094 05:02:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.094 05:02:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.094 05:02:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.094 05:02:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.094 05:02:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.094 05:02:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.094 05:02:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.094 05:02:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.094 05:02:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b65a3717-0a3d-4378-888e-eeb94342b361 00:05:30.094 05:02:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=b65a3717-0a3d-4378-888e-eeb94342b361 00:05:30.094 05:02:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.094 05:02:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.094 05:02:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:30.094 05:02:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:30.094 05:02:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.094 05:02:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.094 05:02:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.094 05:02:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.094 05:02:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.094 05:02:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.094 05:02:49 -- paths/export.sh@5 -- # export PATH 00:05:30.094 05:02:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.094 05:02:49 -- nvmf/common.sh@46 -- # : 0 00:05:30.094 05:02:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:30.094 05:02:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:30.094 05:02:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:30.094 05:02:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.094 05:02:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.094 05:02:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:30.094 05:02:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:30.094 05:02:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:30.094 INFO: launching applications... 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:30.094 Waiting for target to run... 00:05:30.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.094 05:02:49 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56793 00:05:30.095 05:02:49 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
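json_config_extra_key starts the target with an explicit JSON config and a non-default RPC socket, then waits for the listen socket. A hand-run equivalent — the spdk_tgt flags are verbatim from the launch line above; rpc_get_methods is a stock SPDK RPC used here only to confirm the custom socket answers:

```bash
# Launch with the extra_key JSON config on a custom RPC socket.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
  -r /var/tmp/spdk_tgt.sock \
  --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
sleep 2   # crude; the test harness polls with waitforlisten instead
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods
```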
00:05:30.095 05:02:49 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56793 /var/tmp/spdk_tgt.sock 00:05:30.095 05:02:49 -- common/autotest_common.sh@819 -- # '[' -z 56793 ']' 00:05:30.095 05:02:49 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:30.095 05:02:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.095 05:02:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:30.095 05:02:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.095 05:02:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:30.095 05:02:49 -- common/autotest_common.sh@10 -- # set +x 00:05:30.354 [2024-07-26 05:02:49.262703] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:30.354 [2024-07-26 05:02:49.263641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56793 ] 00:05:30.613 [2024-07-26 05:02:49.691845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.872 [2024-07-26 05:02:49.971158] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.872 [2024-07-26 05:02:49.971369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.811 00:05:31.811 INFO: shutting down applications... 00:05:31.811 05:02:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:31.811 05:02:50 -- common/autotest_common.sh@852 -- # return 0 00:05:31.811 05:02:50 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:31.811 05:02:50 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
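Annotation: the shutdown that the trace below steps through is the mirror image of the startup wait: send SIGINT, then poll the pid for up to 30 half-second intervals before declaring the target gone. A minimal sketch of that loop:

    # graceful stop: SIGINT first, then wait for the process to exit
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 sends no signal; it only tests that the pid still exists
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
    done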
00:05:31.811 05:02:50 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:31.811 05:02:50 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:31.811 05:02:50 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:31.811 05:02:50 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56793 ]] 00:05:31.811 05:02:50 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56793 00:05:31.811 05:02:50 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:31.811 05:02:50 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:31.811 05:02:50 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56793 00:05:31.811 05:02:50 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:32.379 05:02:51 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:32.379 05:02:51 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:32.379 05:02:51 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56793 00:05:32.379 05:02:51 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:32.948 05:02:51 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:32.948 05:02:51 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:32.948 05:02:51 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56793 00:05:32.948 05:02:51 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:33.207 05:02:52 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:33.207 05:02:52 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:33.207 05:02:52 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56793 00:05:33.207 05:02:52 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:33.776 05:02:52 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:33.776 05:02:52 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:33.776 05:02:52 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56793 00:05:33.776 05:02:52 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:34.344 05:02:53 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:34.344 05:02:53 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:34.344 05:02:53 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56793 00:05:34.344 05:02:53 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:34.912 05:02:53 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:34.912 05:02:53 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:34.912 05:02:53 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56793 00:05:34.912 SPDK target shutdown done 00:05:34.912 Success 00:05:34.912 05:02:53 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:34.912 05:02:53 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:34.912 05:02:53 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:34.912 05:02:53 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:34.912 05:02:53 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:34.912 00:05:34.912 real 0m4.783s 00:05:34.912 user 0m4.448s 00:05:34.912 sys 0m0.636s 00:05:34.912 05:02:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.912 ************************************ 00:05:34.912 END TEST json_config_extra_key 00:05:34.912 ************************************ 00:05:34.912 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:05:34.912 05:02:53 -- 
spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.912 05:02:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.912 05:02:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.912 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:05:34.912 ************************************ 00:05:34.912 START TEST alias_rpc 00:05:34.912 ************************************ 00:05:34.912 05:02:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.912 * Looking for test storage... 00:05:34.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:34.913 05:02:53 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:34.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.913 05:02:53 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56898 00:05:34.913 05:02:53 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56898 00:05:34.913 05:02:53 -- common/autotest_common.sh@819 -- # '[' -z 56898 ']' 00:05:34.913 05:02:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.913 05:02:53 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.913 05:02:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.913 05:02:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.913 05:02:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.913 05:02:53 -- common/autotest_common.sh@10 -- # set +x 00:05:35.181 [2024-07-26 05:02:54.114632] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
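Annotation: the alias_rpc run below exercises the target through rpc.py load_config -i, which replays a saved JSON configuration over the RPC socket; -i is read here as the include-aliases switch, so deprecated method names still resolve (that reading of the flag is an assumption). A hedged sketch of the round trip:

    # capture the live configuration, then replay it through the alias-aware loader
    scripts/rpc.py save_config > /tmp/spdk_config.json    # /tmp path is illustrative
    scripts/rpc.py load_config -i < /tmp/spdk_config.json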
00:05:35.181 [2024-07-26 05:02:54.114786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56898 ] 00:05:35.458 [2024-07-26 05:02:54.296459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.458 [2024-07-26 05:02:54.527376] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.458 [2024-07-26 05:02:54.527564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.836 05:02:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:36.836 05:02:55 -- common/autotest_common.sh@852 -- # return 0 00:05:36.836 05:02:55 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:36.836 05:02:55 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56898 00:05:36.836 05:02:55 -- common/autotest_common.sh@926 -- # '[' -z 56898 ']' 00:05:36.836 05:02:55 -- common/autotest_common.sh@930 -- # kill -0 56898 00:05:36.836 05:02:55 -- common/autotest_common.sh@931 -- # uname 00:05:36.836 05:02:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:36.836 05:02:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56898 00:05:36.836 killing process with pid 56898 00:05:36.836 05:02:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:36.836 05:02:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:36.836 05:02:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56898' 00:05:36.836 05:02:55 -- common/autotest_common.sh@945 -- # kill 56898 00:05:36.836 05:02:55 -- common/autotest_common.sh@950 -- # wait 56898 00:05:39.367 ************************************ 00:05:39.367 END TEST alias_rpc 00:05:39.367 ************************************ 00:05:39.367 00:05:39.367 real 0m4.436s 00:05:39.367 user 0m4.603s 00:05:39.367 sys 0m0.581s 00:05:39.367 05:02:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.367 05:02:58 -- common/autotest_common.sh@10 -- # set +x 00:05:39.367 05:02:58 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:39.367 05:02:58 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.367 05:02:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.367 05:02:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.367 05:02:58 -- common/autotest_common.sh@10 -- # set +x 00:05:39.367 ************************************ 00:05:39.367 START TEST spdkcli_tcp 00:05:39.368 ************************************ 00:05:39.368 05:02:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:39.368 * Looking for test storage... 
00:05:39.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:39.627 05:02:58 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:39.627 05:02:58 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:39.627 05:02:58 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:39.627 05:02:58 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:39.627 05:02:58 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:39.627 05:02:58 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:39.627 05:02:58 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:39.627 05:02:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:39.627 05:02:58 -- common/autotest_common.sh@10 -- # set +x 00:05:39.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.627 05:02:58 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57003 00:05:39.627 05:02:58 -- spdkcli/tcp.sh@27 -- # waitforlisten 57003 00:05:39.627 05:02:58 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:39.627 05:02:58 -- common/autotest_common.sh@819 -- # '[' -z 57003 ']' 00:05:39.627 05:02:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.627 05:02:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:39.627 05:02:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.627 05:02:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:39.627 05:02:58 -- common/autotest_common.sh@10 -- # set +x 00:05:39.627 [2024-07-26 05:02:58.618284] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
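Annotation: the spdkcli_tcp run below never talks to the UNIX RPC socket directly. socat listens on TCP port 9998 and forwards each connection to /var/tmp/spdk.sock, so rpc.py can exercise the TCP transport (-r sets connection retries, -t the timeout). A minimal reproduction of that bridge:

    # proxy the target's UNIX RPC socket onto a TCP port
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # query over the network path; prints the JSON list of RPC method names
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"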
00:05:39.627 [2024-07-26 05:02:58.618450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57003 ] 00:05:39.886 [2024-07-26 05:02:58.807766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.144 [2024-07-26 05:02:59.143281] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.144 [2024-07-26 05:02:59.143741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.144 [2024-07-26 05:02:59.143785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.044 05:03:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:42.044 05:03:00 -- common/autotest_common.sh@852 -- # return 0 00:05:42.044 05:03:00 -- spdkcli/tcp.sh@31 -- # socat_pid=57041 00:05:42.044 05:03:00 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.044 05:03:00 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:42.044 [ 00:05:42.044 "bdev_malloc_delete", 00:05:42.044 "bdev_malloc_create", 00:05:42.044 "bdev_null_resize", 00:05:42.044 "bdev_null_delete", 00:05:42.044 "bdev_null_create", 00:05:42.044 "bdev_nvme_cuse_unregister", 00:05:42.044 "bdev_nvme_cuse_register", 00:05:42.044 "bdev_opal_new_user", 00:05:42.044 "bdev_opal_set_lock_state", 00:05:42.044 "bdev_opal_delete", 00:05:42.044 "bdev_opal_get_info", 00:05:42.044 "bdev_opal_create", 00:05:42.044 "bdev_nvme_opal_revert", 00:05:42.044 "bdev_nvme_opal_init", 00:05:42.044 "bdev_nvme_send_cmd", 00:05:42.044 "bdev_nvme_get_path_iostat", 00:05:42.044 "bdev_nvme_get_mdns_discovery_info", 00:05:42.044 "bdev_nvme_stop_mdns_discovery", 00:05:42.044 "bdev_nvme_start_mdns_discovery", 00:05:42.044 "bdev_nvme_set_multipath_policy", 00:05:42.044 "bdev_nvme_set_preferred_path", 00:05:42.044 "bdev_nvme_get_io_paths", 00:05:42.044 "bdev_nvme_remove_error_injection", 00:05:42.044 "bdev_nvme_add_error_injection", 00:05:42.044 "bdev_nvme_get_discovery_info", 00:05:42.044 "bdev_nvme_stop_discovery", 00:05:42.044 "bdev_nvme_start_discovery", 00:05:42.044 "bdev_nvme_get_controller_health_info", 00:05:42.044 "bdev_nvme_disable_controller", 00:05:42.044 "bdev_nvme_enable_controller", 00:05:42.044 "bdev_nvme_reset_controller", 00:05:42.044 "bdev_nvme_get_transport_statistics", 00:05:42.044 "bdev_nvme_apply_firmware", 00:05:42.044 "bdev_nvme_detach_controller", 00:05:42.044 "bdev_nvme_get_controllers", 00:05:42.044 "bdev_nvme_attach_controller", 00:05:42.044 "bdev_nvme_set_hotplug", 00:05:42.044 "bdev_nvme_set_options", 00:05:42.044 "bdev_passthru_delete", 00:05:42.044 "bdev_passthru_create", 00:05:42.044 "bdev_lvol_grow_lvstore", 00:05:42.044 "bdev_lvol_get_lvols", 00:05:42.044 "bdev_lvol_get_lvstores", 00:05:42.044 "bdev_lvol_delete", 00:05:42.044 "bdev_lvol_set_read_only", 00:05:42.044 "bdev_lvol_resize", 00:05:42.044 "bdev_lvol_decouple_parent", 00:05:42.044 "bdev_lvol_inflate", 00:05:42.044 "bdev_lvol_rename", 00:05:42.044 "bdev_lvol_clone_bdev", 00:05:42.044 "bdev_lvol_clone", 00:05:42.044 "bdev_lvol_snapshot", 00:05:42.044 "bdev_lvol_create", 00:05:42.044 "bdev_lvol_delete_lvstore", 00:05:42.044 "bdev_lvol_rename_lvstore", 00:05:42.044 "bdev_lvol_create_lvstore", 00:05:42.044 "bdev_raid_set_options", 00:05:42.044 "bdev_raid_remove_base_bdev", 00:05:42.044 "bdev_raid_add_base_bdev", 
00:05:42.044 "bdev_raid_delete", 00:05:42.044 "bdev_raid_create", 00:05:42.044 "bdev_raid_get_bdevs", 00:05:42.044 "bdev_error_inject_error", 00:05:42.044 "bdev_error_delete", 00:05:42.044 "bdev_error_create", 00:05:42.044 "bdev_split_delete", 00:05:42.044 "bdev_split_create", 00:05:42.044 "bdev_delay_delete", 00:05:42.044 "bdev_delay_create", 00:05:42.044 "bdev_delay_update_latency", 00:05:42.044 "bdev_zone_block_delete", 00:05:42.044 "bdev_zone_block_create", 00:05:42.044 "blobfs_create", 00:05:42.044 "blobfs_detect", 00:05:42.044 "blobfs_set_cache_size", 00:05:42.044 "bdev_xnvme_delete", 00:05:42.044 "bdev_xnvme_create", 00:05:42.044 "bdev_aio_delete", 00:05:42.044 "bdev_aio_rescan", 00:05:42.044 "bdev_aio_create", 00:05:42.044 "bdev_ftl_set_property", 00:05:42.044 "bdev_ftl_get_properties", 00:05:42.044 "bdev_ftl_get_stats", 00:05:42.044 "bdev_ftl_unmap", 00:05:42.044 "bdev_ftl_unload", 00:05:42.044 "bdev_ftl_delete", 00:05:42.044 "bdev_ftl_load", 00:05:42.045 "bdev_ftl_create", 00:05:42.045 "bdev_virtio_attach_controller", 00:05:42.045 "bdev_virtio_scsi_get_devices", 00:05:42.045 "bdev_virtio_detach_controller", 00:05:42.045 "bdev_virtio_blk_set_hotplug", 00:05:42.045 "bdev_iscsi_delete", 00:05:42.045 "bdev_iscsi_create", 00:05:42.045 "bdev_iscsi_set_options", 00:05:42.045 "accel_error_inject_error", 00:05:42.045 "ioat_scan_accel_module", 00:05:42.045 "dsa_scan_accel_module", 00:05:42.045 "iaa_scan_accel_module", 00:05:42.045 "iscsi_set_options", 00:05:42.045 "iscsi_get_auth_groups", 00:05:42.045 "iscsi_auth_group_remove_secret", 00:05:42.045 "iscsi_auth_group_add_secret", 00:05:42.045 "iscsi_delete_auth_group", 00:05:42.045 "iscsi_create_auth_group", 00:05:42.045 "iscsi_set_discovery_auth", 00:05:42.045 "iscsi_get_options", 00:05:42.045 "iscsi_target_node_request_logout", 00:05:42.045 "iscsi_target_node_set_redirect", 00:05:42.045 "iscsi_target_node_set_auth", 00:05:42.045 "iscsi_target_node_add_lun", 00:05:42.045 "iscsi_get_connections", 00:05:42.045 "iscsi_portal_group_set_auth", 00:05:42.045 "iscsi_start_portal_group", 00:05:42.045 "iscsi_delete_portal_group", 00:05:42.045 "iscsi_create_portal_group", 00:05:42.045 "iscsi_get_portal_groups", 00:05:42.045 "iscsi_delete_target_node", 00:05:42.045 "iscsi_target_node_remove_pg_ig_maps", 00:05:42.045 "iscsi_target_node_add_pg_ig_maps", 00:05:42.045 "iscsi_create_target_node", 00:05:42.045 "iscsi_get_target_nodes", 00:05:42.045 "iscsi_delete_initiator_group", 00:05:42.045 "iscsi_initiator_group_remove_initiators", 00:05:42.045 "iscsi_initiator_group_add_initiators", 00:05:42.045 "iscsi_create_initiator_group", 00:05:42.045 "iscsi_get_initiator_groups", 00:05:42.045 "nvmf_set_crdt", 00:05:42.045 "nvmf_set_config", 00:05:42.045 "nvmf_set_max_subsystems", 00:05:42.045 "nvmf_subsystem_get_listeners", 00:05:42.045 "nvmf_subsystem_get_qpairs", 00:05:42.045 "nvmf_subsystem_get_controllers", 00:05:42.045 "nvmf_get_stats", 00:05:42.045 "nvmf_get_transports", 00:05:42.045 "nvmf_create_transport", 00:05:42.045 "nvmf_get_targets", 00:05:42.045 "nvmf_delete_target", 00:05:42.045 "nvmf_create_target", 00:05:42.045 "nvmf_subsystem_allow_any_host", 00:05:42.045 "nvmf_subsystem_remove_host", 00:05:42.045 "nvmf_subsystem_add_host", 00:05:42.045 "nvmf_subsystem_remove_ns", 00:05:42.045 "nvmf_subsystem_add_ns", 00:05:42.045 "nvmf_subsystem_listener_set_ana_state", 00:05:42.045 "nvmf_discovery_get_referrals", 00:05:42.045 "nvmf_discovery_remove_referral", 00:05:42.045 "nvmf_discovery_add_referral", 00:05:42.045 "nvmf_subsystem_remove_listener", 00:05:42.045 
"nvmf_subsystem_add_listener", 00:05:42.045 "nvmf_delete_subsystem", 00:05:42.045 "nvmf_create_subsystem", 00:05:42.045 "nvmf_get_subsystems", 00:05:42.045 "env_dpdk_get_mem_stats", 00:05:42.045 "nbd_get_disks", 00:05:42.045 "nbd_stop_disk", 00:05:42.045 "nbd_start_disk", 00:05:42.045 "ublk_recover_disk", 00:05:42.045 "ublk_get_disks", 00:05:42.045 "ublk_stop_disk", 00:05:42.045 "ublk_start_disk", 00:05:42.045 "ublk_destroy_target", 00:05:42.045 "ublk_create_target", 00:05:42.045 "virtio_blk_create_transport", 00:05:42.045 "virtio_blk_get_transports", 00:05:42.045 "vhost_controller_set_coalescing", 00:05:42.045 "vhost_get_controllers", 00:05:42.045 "vhost_delete_controller", 00:05:42.045 "vhost_create_blk_controller", 00:05:42.045 "vhost_scsi_controller_remove_target", 00:05:42.045 "vhost_scsi_controller_add_target", 00:05:42.045 "vhost_start_scsi_controller", 00:05:42.045 "vhost_create_scsi_controller", 00:05:42.045 "thread_set_cpumask", 00:05:42.045 "framework_get_scheduler", 00:05:42.045 "framework_set_scheduler", 00:05:42.045 "framework_get_reactors", 00:05:42.045 "thread_get_io_channels", 00:05:42.045 "thread_get_pollers", 00:05:42.045 "thread_get_stats", 00:05:42.045 "framework_monitor_context_switch", 00:05:42.045 "spdk_kill_instance", 00:05:42.045 "log_enable_timestamps", 00:05:42.045 "log_get_flags", 00:05:42.045 "log_clear_flag", 00:05:42.045 "log_set_flag", 00:05:42.045 "log_get_level", 00:05:42.045 "log_set_level", 00:05:42.045 "log_get_print_level", 00:05:42.045 "log_set_print_level", 00:05:42.045 "framework_enable_cpumask_locks", 00:05:42.045 "framework_disable_cpumask_locks", 00:05:42.045 "framework_wait_init", 00:05:42.045 "framework_start_init", 00:05:42.045 "scsi_get_devices", 00:05:42.045 "bdev_get_histogram", 00:05:42.045 "bdev_enable_histogram", 00:05:42.045 "bdev_set_qos_limit", 00:05:42.045 "bdev_set_qd_sampling_period", 00:05:42.045 "bdev_get_bdevs", 00:05:42.045 "bdev_reset_iostat", 00:05:42.045 "bdev_get_iostat", 00:05:42.045 "bdev_examine", 00:05:42.045 "bdev_wait_for_examine", 00:05:42.045 "bdev_set_options", 00:05:42.045 "notify_get_notifications", 00:05:42.045 "notify_get_types", 00:05:42.045 "accel_get_stats", 00:05:42.045 "accel_set_options", 00:05:42.045 "accel_set_driver", 00:05:42.045 "accel_crypto_key_destroy", 00:05:42.045 "accel_crypto_keys_get", 00:05:42.045 "accel_crypto_key_create", 00:05:42.045 "accel_assign_opc", 00:05:42.045 "accel_get_module_info", 00:05:42.045 "accel_get_opc_assignments", 00:05:42.045 "vmd_rescan", 00:05:42.045 "vmd_remove_device", 00:05:42.045 "vmd_enable", 00:05:42.045 "sock_set_default_impl", 00:05:42.045 "sock_impl_set_options", 00:05:42.045 "sock_impl_get_options", 00:05:42.045 "iobuf_get_stats", 00:05:42.045 "iobuf_set_options", 00:05:42.045 "framework_get_pci_devices", 00:05:42.045 "framework_get_config", 00:05:42.045 "framework_get_subsystems", 00:05:42.045 "trace_get_info", 00:05:42.045 "trace_get_tpoint_group_mask", 00:05:42.045 "trace_disable_tpoint_group", 00:05:42.045 "trace_enable_tpoint_group", 00:05:42.045 "trace_clear_tpoint_mask", 00:05:42.045 "trace_set_tpoint_mask", 00:05:42.045 "spdk_get_version", 00:05:42.045 "rpc_get_methods" 00:05:42.045 ] 00:05:42.045 05:03:01 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:42.045 05:03:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:42.045 05:03:01 -- common/autotest_common.sh@10 -- # set +x 00:05:42.045 05:03:01 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:42.045 05:03:01 -- spdkcli/tcp.sh@38 -- # killprocess 57003 00:05:42.045 
05:03:01 -- common/autotest_common.sh@926 -- # '[' -z 57003 ']' 00:05:42.045 05:03:01 -- common/autotest_common.sh@930 -- # kill -0 57003 00:05:42.045 05:03:01 -- common/autotest_common.sh@931 -- # uname 00:05:42.045 05:03:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:42.045 05:03:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57003 00:05:42.045 killing process with pid 57003 00:05:42.045 05:03:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:42.045 05:03:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:42.045 05:03:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57003' 00:05:42.045 05:03:01 -- common/autotest_common.sh@945 -- # kill 57003 00:05:42.045 05:03:01 -- common/autotest_common.sh@950 -- # wait 57003 00:05:45.368 ************************************ 00:05:45.368 END TEST spdkcli_tcp 00:05:45.368 ************************************ 00:05:45.368 00:05:45.368 real 0m5.418s 00:05:45.368 user 0m9.684s 00:05:45.368 sys 0m0.837s 00:05:45.368 05:03:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.368 05:03:03 -- common/autotest_common.sh@10 -- # set +x 00:05:45.368 05:03:03 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.368 05:03:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.368 05:03:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.368 05:03:03 -- common/autotest_common.sh@10 -- # set +x 00:05:45.368 ************************************ 00:05:45.368 START TEST dpdk_mem_utility 00:05:45.368 ************************************ 00:05:45.368 05:03:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.368 * Looking for test storage... 00:05:45.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:45.368 05:03:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:45.368 05:03:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57137 00:05:45.368 05:03:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.368 05:03:03 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57137 00:05:45.368 05:03:03 -- common/autotest_common.sh@819 -- # '[' -z 57137 ']' 00:05:45.368 05:03:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.368 05:03:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:45.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.368 05:03:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.368 05:03:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:45.368 05:03:03 -- common/autotest_common.sh@10 -- # set +x 00:05:45.368 [2024-07-26 05:03:04.045622] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
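Annotation: the dpdk_mem_utility pass below drives two tools in sequence: env_dpdk_get_mem_stats asks the running target to write its DPDK memory snapshot to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py post-processes that file into the heap/mempool/memzone report. A sketch of the same flow (reading -m 0 as "print the detailed element map for heap 0" is inferred from the output that follows):

    # dump the target's DPDK memory state, then summarize it offline
    scripts/rpc.py env_dpdk_get_mem_stats    # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    scripts/dpdk_mem_info.py                 # heap, mempool, and memzone totals
    scripts/dpdk_mem_info.py -m 0            # per-element address/size map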
00:05:45.368 [2024-07-26 05:03:04.046012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57137 ] 00:05:45.368 [2024-07-26 05:03:04.209129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.626 [2024-07-26 05:03:04.476407] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.626 [2024-07-26 05:03:04.476901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.004 05:03:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.004 05:03:05 -- common/autotest_common.sh@852 -- # return 0 00:05:47.004 05:03:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:47.004 05:03:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:47.004 05:03:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.004 05:03:05 -- common/autotest_common.sh@10 -- # set +x 00:05:47.004 { 00:05:47.004 "filename": "/tmp/spdk_mem_dump.txt" 00:05:47.004 } 00:05:47.004 05:03:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.004 05:03:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:47.004 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:47.004 1 heaps totaling size 820.000000 MiB 00:05:47.004 size: 820.000000 MiB heap id: 0 00:05:47.004 end heaps---------- 00:05:47.004 8 mempools totaling size 598.116089 MiB 00:05:47.004 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:47.004 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:47.004 size: 84.521057 MiB name: bdev_io_57137 00:05:47.004 size: 51.011292 MiB name: evtpool_57137 00:05:47.004 size: 50.003479 MiB name: msgpool_57137 00:05:47.004 size: 21.763794 MiB name: PDU_Pool 00:05:47.004 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:47.004 size: 0.026123 MiB name: Session_Pool 00:05:47.004 end mempools------- 00:05:47.004 6 memzones totaling size 4.142822 MiB 00:05:47.004 size: 1.000366 MiB name: RG_ring_0_57137 00:05:47.004 size: 1.000366 MiB name: RG_ring_1_57137 00:05:47.004 size: 1.000366 MiB name: RG_ring_4_57137 00:05:47.004 size: 1.000366 MiB name: RG_ring_5_57137 00:05:47.004 size: 0.125366 MiB name: RG_ring_2_57137 00:05:47.004 size: 0.015991 MiB name: RG_ring_3_57137 00:05:47.004 end memzones------- 00:05:47.004 05:03:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.004 heap id: 0 total size: 820.000000 MiB number of busy elements: 302 number of free elements: 18 00:05:47.004 list of free elements. 
size: 18.451050 MiB 00:05:47.004 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:47.004 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:47.004 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:47.004 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:47.004 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:47.004 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:47.004 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:47.004 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:47.004 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:47.004 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:47.004 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:47.004 element at address: 0x200000200000 with size: 0.829224 MiB 00:05:47.004 element at address: 0x20001b000000 with size: 0.564392 MiB 00:05:47.004 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:47.004 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:47.004 element at address: 0x200013800000 with size: 0.467896 MiB 00:05:47.004 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:47.004 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:47.004 list of standard malloc elements. size: 199.284546 MiB 00:05:47.004 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:47.004 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:47.004 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:47.004 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:47.004 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:47.004 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:47.004 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:47.004 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:47.004 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:47.004 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:47.004 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:47.004 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:05:47.004 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:47.004 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:47.005 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001927d1c0 
with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:47.005 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b092ac0 with size: 0.000244 MiB 
00:05:47.006 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:47.006 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:47.006 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:47.006 element at 
address: 0x20002846b480 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846e580 
with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:47.006 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:47.007 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:47.007 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:47.007 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:47.007 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:47.007 list of memzone associated elements. 
size: 602.264404 MiB 00:05:47.007 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:47.007 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.007 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:47.007 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.007 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:47.007 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_57137_0 00:05:47.007 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:47.007 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57137_0 00:05:47.007 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:47.007 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57137_0 00:05:47.007 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:47.007 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.007 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:47.007 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.007 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:47.007 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57137 00:05:47.007 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:47.007 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57137 00:05:47.007 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:47.007 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57137 00:05:47.007 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:47.007 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.007 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:47.007 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.007 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:47.007 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.007 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:47.007 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.007 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:47.007 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57137 00:05:47.007 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:47.007 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57137 00:05:47.007 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:47.007 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57137 00:05:47.007 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:47.007 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57137 00:05:47.007 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:47.007 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57137 00:05:47.007 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:47.007 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.007 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:47.007 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.007 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:47.007 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.007 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:47.007 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_57137 00:05:47.007 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:47.007 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.007 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:47.007 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.007 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:47.007 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57137 00:05:47.007 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:47.007 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.007 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:47.007 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57137 00:05:47.007 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:47.007 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57137 00:05:47.007 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:47.007 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.007 05:03:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.007 05:03:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57137 00:05:47.007 05:03:05 -- common/autotest_common.sh@926 -- # '[' -z 57137 ']' 00:05:47.007 05:03:05 -- common/autotest_common.sh@930 -- # kill -0 57137 00:05:47.007 05:03:05 -- common/autotest_common.sh@931 -- # uname 00:05:47.007 05:03:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:47.007 05:03:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57137 00:05:47.007 killing process with pid 57137 00:05:47.007 05:03:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:47.007 05:03:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:47.007 05:03:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57137' 00:05:47.007 05:03:05 -- common/autotest_common.sh@945 -- # kill 57137 00:05:47.007 05:03:05 -- common/autotest_common.sh@950 -- # wait 57137 00:05:49.540 00:05:49.540 real 0m4.661s 00:05:49.540 user 0m4.641s 00:05:49.540 sys 0m0.750s 00:05:49.540 05:03:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.540 05:03:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.540 ************************************ 00:05:49.540 END TEST dpdk_mem_utility 00:05:49.540 ************************************ 00:05:49.540 05:03:08 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:49.540 05:03:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.540 05:03:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.540 05:03:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.540 ************************************ 00:05:49.540 START TEST event 00:05:49.540 ************************************ 00:05:49.540 05:03:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:49.800 * Looking for test storage... 
00:05:49.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:49.800 05:03:08 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:49.800 05:03:08 -- bdev/nbd_common.sh@6 -- # set -e 00:05:49.800 05:03:08 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.800 05:03:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:49.800 05:03:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.800 05:03:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.800 ************************************ 00:05:49.800 START TEST event_perf 00:05:49.800 ************************************ 00:05:49.800 05:03:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.800 Running I/O for 1 seconds...[2024-07-26 05:03:08.768086] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:49.800 [2024-07-26 05:03:08.768320] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57238 ] 00:05:50.059 [2024-07-26 05:03:08.973414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.317 [2024-07-26 05:03:09.251152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.317 [2024-07-26 05:03:09.251373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.317 Running I/O for 1 seconds...[2024-07-26 05:03:09.251542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.317 [2024-07-26 05:03:09.251650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.693 00:05:51.693 lcore 0: 104088 00:05:51.693 lcore 1: 104085 00:05:51.693 lcore 2: 104086 00:05:51.693 lcore 3: 104089 00:05:51.693 done. 00:05:51.693 00:05:51.693 real 0m2.042s 00:05:51.693 user 0m4.718s 00:05:51.693 sys 0m0.204s 00:05:51.693 05:03:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.693 05:03:10 -- common/autotest_common.sh@10 -- # set +x 00:05:51.693 ************************************ 00:05:51.693 END TEST event_perf 00:05:51.693 ************************************ 00:05:51.693 05:03:10 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:51.693 05:03:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:51.693 05:03:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.693 05:03:10 -- common/autotest_common.sh@10 -- # set +x 00:05:51.952 ************************************ 00:05:51.952 START TEST event_reactor 00:05:51.952 ************************************ 00:05:51.952 05:03:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:51.952 [2024-07-26 05:03:10.851991] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
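Every test in this log runs through the same run_test wrapper, which prints the starred START TEST / END TEST banners, times the wrapped command (the real/user/sys lines above), and propagates its exit status. A hedged reduction of that wrapper follows; the real helper lives in autotest_common.sh, and this condensed form is only illustrative.

  # Illustrative reduction of the run_test helper behind the banners above.
  run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                     # run the test binary or script, timed
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
  }
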
00:05:51.952 [2024-07-26 05:03:10.852273] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57283 ] 00:05:51.952 [2024-07-26 05:03:11.015511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.210 [2024-07-26 05:03:11.279487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.140 test_start 00:05:54.140 oneshot 00:05:54.140 tick 100 00:05:54.140 tick 100 00:05:54.140 tick 250 00:05:54.140 tick 100 00:05:54.140 tick 100 00:05:54.140 tick 100 00:05:54.140 tick 250 00:05:54.140 tick 500 00:05:54.140 tick 100 00:05:54.140 tick 100 00:05:54.140 tick 250 00:05:54.140 tick 100 00:05:54.140 tick 100 00:05:54.140 test_end 00:05:54.140 ************************************ 00:05:54.140 END TEST event_reactor 00:05:54.140 ************************************ 00:05:54.140 00:05:54.140 real 0m1.940s 00:05:54.140 user 0m1.697s 00:05:54.140 sys 0m0.132s 00:05:54.140 05:03:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.140 05:03:12 -- common/autotest_common.sh@10 -- # set +x 00:05:54.140 05:03:12 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:54.140 05:03:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:54.140 05:03:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.140 05:03:12 -- common/autotest_common.sh@10 -- # set +x 00:05:54.140 ************************************ 00:05:54.140 START TEST event_reactor_perf 00:05:54.140 ************************************ 00:05:54.140 05:03:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:54.140 [2024-07-26 05:03:12.858035] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:54.140 [2024-07-26 05:03:12.858154] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57325 ] 00:05:54.140 [2024-07-26 05:03:13.023218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.399 [2024-07-26 05:03:13.297141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.776 test_start 00:05:55.776 test_end 00:05:55.776 Performance: 378181 events per second 00:05:55.776 00:05:55.776 real 0m1.957s 00:05:55.776 user 0m1.709s 00:05:55.776 sys 0m0.138s 00:05:55.776 ************************************ 00:05:55.776 END TEST event_reactor_perf 00:05:55.776 ************************************ 00:05:55.776 05:03:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.776 05:03:14 -- common/autotest_common.sh@10 -- # set +x 00:05:55.776 05:03:14 -- event/event.sh@49 -- # uname -s 00:05:55.776 05:03:14 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:55.776 05:03:14 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:55.776 05:03:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.776 05:03:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.776 05:03:14 -- common/autotest_common.sh@10 -- # set +x 00:05:55.776 ************************************ 00:05:55.776 START TEST event_scheduler 00:05:55.776 ************************************ 00:05:55.776 05:03:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:56.035 * Looking for test storage... 00:05:56.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:56.035 05:03:14 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:56.035 05:03:14 -- scheduler/scheduler.sh@35 -- # scheduler_pid=57392 00:05:56.035 05:03:14 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.035 05:03:14 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:56.035 05:03:14 -- scheduler/scheduler.sh@37 -- # waitforlisten 57392 00:05:56.035 05:03:14 -- common/autotest_common.sh@819 -- # '[' -z 57392 ']' 00:05:56.035 05:03:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.035 05:03:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.035 05:03:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.035 05:03:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.035 05:03:14 -- common/autotest_common.sh@10 -- # set +x 00:05:56.035 [2024-07-26 05:03:15.061040] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:56.035 [2024-07-26 05:03:15.061739] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57392 ] 00:05:56.293 [2024-07-26 05:03:15.253429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.552 [2024-07-26 05:03:15.554011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.552 [2024-07-26 05:03:15.554131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.552 [2024-07-26 05:03:15.554276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.552 [2024-07-26 05:03:15.554303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.120 05:03:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.120 05:03:15 -- common/autotest_common.sh@852 -- # return 0 00:05:57.120 05:03:15 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:57.120 05:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.120 05:03:15 -- common/autotest_common.sh@10 -- # set +x 00:05:57.120 POWER: Env isn't set yet! 00:05:57.120 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:57.120 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.120 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.120 POWER: Attempting to initialise PSTAT power management... 00:05:57.120 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.120 POWER: Cannot set governor of lcore 0 to performance 00:05:57.120 POWER: Attempting to initialise AMD PSTATE power management... 00:05:57.120 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.120 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.120 POWER: Attempting to initialise CPPC power management... 00:05:57.120 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.120 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.120 POWER: Attempting to initialise VM power management... 
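The POWER: failures above (and the guest-channel error that follows) share one cause on this guest: the VM exposes no cpufreq sysfs nodes, so each backend DPDK tries (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC, and finally the VM guest channel) fails in turn and the dynamic scheduler falls back to running without a governor. An illustrative probe, not part of the harness, shows what DPDK is trying to open:

  # List which CPUs expose the scaling_governor file DPDK power management needs.
  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    gov=$cpu/cpufreq/scaling_governor
    if [ -r "$gov" ]; then
      echo "${cpu##*/}: governor=$(cat "$gov")"
    else
      echo "${cpu##*/}: no cpufreq node, expect 'Cannot set governor' notices"
    fi
  done
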
00:05:57.120 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:57.120 POWER: Unable to set Power Management Environment for lcore 0 00:05:57.120 [2024-07-26 05:03:15.972662] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:57.120 [2024-07-26 05:03:15.972684] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:57.120 [2024-07-26 05:03:15.972698] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:57.120 [2024-07-26 05:03:15.972718] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:57.120 [2024-07-26 05:03:15.972732] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:57.120 [2024-07-26 05:03:15.972742] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:57.120 05:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.120 05:03:15 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:57.120 05:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.121 05:03:15 -- common/autotest_common.sh@10 -- # set +x 00:05:57.379 [2024-07-26 05:03:16.353898] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:57.379 05:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:57.379 05:03:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.379 05:03:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.379 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.379 ************************************ 00:05:57.379 START TEST scheduler_create_thread 00:05:57.379 ************************************ 00:05:57.379 05:03:16 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:57.379 05:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.379 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.379 2 00:05:57.379 05:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:57.379 05:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.379 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.379 3 00:05:57.379 05:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:57.379 05:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.379 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.379 4 00:05:57.379 05:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:57.379 05:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.379 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.379 5 00:05:57.379 05:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:57.379 05:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.379 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.379 6 00:05:57.379 05:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:57.379 05:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.379 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.379 7 00:05:57.379 05:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:57.379 05:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.379 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.379 8 00:05:57.379 05:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:57.379 05:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.379 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.379 9 00:05:57.379 05:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:57.379 05:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.379 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.379 10 00:05:57.379 05:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:57.379 05:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.379 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.379 05:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:57.379 05:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.379 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.379 05:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.379 05:03:16 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:57.379 05:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.379 05:03:16 -- common/autotest_common.sh@10 -- # set +x 00:05:58.757 05:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:58.757 05:03:17 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.757 05:03:17 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.757 05:03:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:58.757 05:03:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.694 05:03:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.694 00:05:59.694 real 0m2.137s 00:05:59.694 user 0m0.016s 00:05:59.694 sys 0m0.010s 00:05:59.694 ************************************ 00:05:59.694 END TEST scheduler_create_thread 00:05:59.694 ************************************ 00:05:59.694 05:03:18 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.694 05:03:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.694 05:03:18 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:59.694 05:03:18 -- scheduler/scheduler.sh@46 -- # killprocess 57392 00:05:59.694 05:03:18 -- common/autotest_common.sh@926 -- # '[' -z 57392 ']' 00:05:59.694 05:03:18 -- common/autotest_common.sh@930 -- # kill -0 57392 00:05:59.694 05:03:18 -- common/autotest_common.sh@931 -- # uname 00:05:59.694 05:03:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:59.694 05:03:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57392 00:05:59.694 killing process with pid 57392 00:05:59.694 05:03:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:59.694 05:03:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:59.694 05:03:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57392' 00:05:59.694 05:03:18 -- common/autotest_common.sh@945 -- # kill 57392 00:05:59.694 05:03:18 -- common/autotest_common.sh@950 -- # wait 57392 00:05:59.952 [2024-07-26 05:03:18.982973] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:01.330 00:06:01.330 real 0m5.443s 00:06:01.330 user 0m8.531s 00:06:01.330 sys 0m0.505s 00:06:01.330 05:03:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.330 ************************************ 00:06:01.330 END TEST event_scheduler 00:06:01.330 ************************************ 00:06:01.330 05:03:20 -- common/autotest_common.sh@10 -- # set +x 00:06:01.330 05:03:20 -- event/event.sh@51 -- # modprobe -n nbd 00:06:01.330 05:03:20 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:01.330 05:03:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:01.330 05:03:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.330 05:03:20 -- common/autotest_common.sh@10 -- # set +x 00:06:01.330 ************************************ 00:06:01.330 START TEST app_repeat 00:06:01.330 ************************************ 00:06:01.330 05:03:20 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:01.330 05:03:20 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.330 05:03:20 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.330 05:03:20 -- event/event.sh@13 -- # local nbd_list 00:06:01.330 05:03:20 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.330 05:03:20 -- event/event.sh@14 -- # local bdev_list 00:06:01.330 05:03:20 -- event/event.sh@15 -- # local repeat_times=4 00:06:01.330 05:03:20 -- event/event.sh@17 -- # modprobe nbd 00:06:01.330 Process app_repeat pid: 57509 00:06:01.330 spdk_app_start Round 0 00:06:01.330 05:03:20 -- event/event.sh@19 -- # repeat_pid=57509 00:06:01.330 05:03:20 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.330 05:03:20 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:01.330 05:03:20 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57509' 00:06:01.330 05:03:20 -- event/event.sh@23 -- # for i in {0..2} 00:06:01.330 05:03:20 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:01.330 05:03:20 -- event/event.sh@25 -- # waitforlisten 57509 /var/tmp/spdk-nbd.sock 00:06:01.330 05:03:20 -- common/autotest_common.sh@819 -- # '[' -z 57509 ']' 00:06:01.330 05:03:20 -- common/autotest_common.sh@823 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.330 05:03:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:01.330 05:03:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.330 05:03:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:01.330 05:03:20 -- common/autotest_common.sh@10 -- # set +x 00:06:01.330 [2024-07-26 05:03:20.429967] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:01.330 [2024-07-26 05:03:20.430134] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57509 ] 00:06:01.589 [2024-07-26 05:03:20.617636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.848 [2024-07-26 05:03:20.892529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.848 [2024-07-26 05:03:20.892562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.227 05:03:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.227 05:03:21 -- common/autotest_common.sh@852 -- # return 0 00:06:03.227 05:03:21 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.227 Malloc0 00:06:03.227 05:03:22 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.487 Malloc1 00:06:03.487 05:03:22 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@12 -- # local i 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.487 05:03:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.746 /dev/nbd0 00:06:03.746 05:03:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.746 05:03:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.746 05:03:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:03.746 05:03:22 -- common/autotest_common.sh@857 -- # local i 00:06:03.746 05:03:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:03.746 05:03:22 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:03.746 05:03:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:03.746 05:03:22 -- common/autotest_common.sh@861 -- # break 00:06:03.746 05:03:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:03.746 05:03:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:03.746 05:03:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.746 1+0 records in 00:06:03.746 1+0 records out 00:06:03.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00357066 s, 1.1 MB/s 00:06:03.746 05:03:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.746 05:03:22 -- common/autotest_common.sh@874 -- # size=4096 00:06:03.746 05:03:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.746 05:03:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:03.746 05:03:22 -- common/autotest_common.sh@877 -- # return 0 00:06:03.746 05:03:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.746 05:03:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.746 05:03:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.005 /dev/nbd1 00:06:04.005 05:03:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.005 05:03:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.005 05:03:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:04.005 05:03:22 -- common/autotest_common.sh@857 -- # local i 00:06:04.005 05:03:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:04.005 05:03:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:04.005 05:03:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:04.005 05:03:22 -- common/autotest_common.sh@861 -- # break 00:06:04.005 05:03:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:04.005 05:03:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:04.005 05:03:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.005 1+0 records in 00:06:04.005 1+0 records out 00:06:04.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207199 s, 19.8 MB/s 00:06:04.005 05:03:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.005 05:03:23 -- common/autotest_common.sh@874 -- # size=4096 00:06:04.005 05:03:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.005 05:03:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:04.005 05:03:23 -- common/autotest_common.sh@877 -- # return 0 00:06:04.005 05:03:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.005 05:03:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.005 05:03:23 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.005 05:03:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.005 05:03:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.264 { 00:06:04.264 "nbd_device": "/dev/nbd0", 00:06:04.264 "bdev_name": "Malloc0" 00:06:04.264 }, 00:06:04.264 { 00:06:04.264 "nbd_device": "/dev/nbd1", 
00:06:04.264 "bdev_name": "Malloc1" 00:06:04.264 } 00:06:04.264 ]' 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.264 { 00:06:04.264 "nbd_device": "/dev/nbd0", 00:06:04.264 "bdev_name": "Malloc0" 00:06:04.264 }, 00:06:04.264 { 00:06:04.264 "nbd_device": "/dev/nbd1", 00:06:04.264 "bdev_name": "Malloc1" 00:06:04.264 } 00:06:04.264 ]' 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.264 /dev/nbd1' 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.264 /dev/nbd1' 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.264 256+0 records in 00:06:04.264 256+0 records out 00:06:04.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0055612 s, 189 MB/s 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.264 256+0 records in 00:06:04.264 256+0 records out 00:06:04.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293553 s, 35.7 MB/s 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.264 256+0 records in 00:06:04.264 256+0 records out 00:06:04.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310125 s, 33.8 MB/s 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@51 -- # local i 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.264 05:03:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.522 05:03:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.522 05:03:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.522 05:03:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.522 05:03:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.522 05:03:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.522 05:03:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.522 05:03:23 -- bdev/nbd_common.sh@41 -- # break 00:06:04.522 05:03:23 -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.522 05:03:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.522 05:03:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.781 05:03:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.781 05:03:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.781 05:03:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.781 05:03:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.781 05:03:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.781 05:03:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.781 05:03:23 -- bdev/nbd_common.sh@41 -- # break 00:06:04.781 05:03:23 -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.781 05:03:23 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.781 05:03:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.781 05:03:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.781 05:03:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.039 05:03:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.039 05:03:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.039 05:03:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.039 05:03:23 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.039 05:03:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.039 05:03:23 -- bdev/nbd_common.sh@65 -- # true 00:06:05.039 05:03:23 -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.039 05:03:23 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.039 05:03:23 -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.039 05:03:23 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.039 05:03:23 -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.039 05:03:23 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.298 05:03:24 -- event/event.sh@35 -- # sleep 3 00:06:07.198 [2024-07-26 05:03:25.786040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.198 [2024-07-26 05:03:26.038895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.198 [2024-07-26 
05:03:26.038894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.198 [2024-07-26 05:03:26.303378] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.198 [2024-07-26 05:03:26.303460] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.574 spdk_app_start Round 1 00:06:08.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.574 05:03:27 -- event/event.sh@23 -- # for i in {0..2} 00:06:08.574 05:03:27 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:08.574 05:03:27 -- event/event.sh@25 -- # waitforlisten 57509 /var/tmp/spdk-nbd.sock 00:06:08.574 05:03:27 -- common/autotest_common.sh@819 -- # '[' -z 57509 ']' 00:06:08.574 05:03:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.574 05:03:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.574 05:03:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.574 05:03:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.574 05:03:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.574 05:03:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.574 05:03:27 -- common/autotest_common.sh@852 -- # return 0 00:06:08.574 05:03:27 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.833 Malloc0 00:06:08.833 05:03:27 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.091 Malloc1 00:06:09.091 05:03:28 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@12 -- # local i 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.091 05:03:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.350 /dev/nbd0 00:06:09.350 05:03:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.350 05:03:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.350 05:03:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:09.350 05:03:28 -- common/autotest_common.sh@857 -- # local i 00:06:09.350 05:03:28 -- common/autotest_common.sh@859 -- # (( i = 
1 )) 00:06:09.350 05:03:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:09.350 05:03:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:09.350 05:03:28 -- common/autotest_common.sh@861 -- # break 00:06:09.350 05:03:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:09.350 05:03:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:09.350 05:03:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.350 1+0 records in 00:06:09.350 1+0 records out 00:06:09.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267567 s, 15.3 MB/s 00:06:09.350 05:03:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.350 05:03:28 -- common/autotest_common.sh@874 -- # size=4096 00:06:09.350 05:03:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.350 05:03:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:09.350 05:03:28 -- common/autotest_common.sh@877 -- # return 0 00:06:09.350 05:03:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.350 05:03:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.350 05:03:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.609 /dev/nbd1 00:06:09.609 05:03:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.609 05:03:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.609 05:03:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:09.609 05:03:28 -- common/autotest_common.sh@857 -- # local i 00:06:09.609 05:03:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:09.609 05:03:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:09.609 05:03:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:09.609 05:03:28 -- common/autotest_common.sh@861 -- # break 00:06:09.609 05:03:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:09.609 05:03:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:09.609 05:03:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.609 1+0 records in 00:06:09.609 1+0 records out 00:06:09.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317841 s, 12.9 MB/s 00:06:09.609 05:03:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.609 05:03:28 -- common/autotest_common.sh@874 -- # size=4096 00:06:09.609 05:03:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.609 05:03:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:09.609 05:03:28 -- common/autotest_common.sh@877 -- # return 0 00:06:09.609 05:03:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.609 05:03:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.609 05:03:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.609 05:03:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.609 05:03:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.868 { 00:06:09.868 "nbd_device": "/dev/nbd0", 00:06:09.868 "bdev_name": "Malloc0" 00:06:09.868 }, 00:06:09.868 { 00:06:09.868 
"nbd_device": "/dev/nbd1", 00:06:09.868 "bdev_name": "Malloc1" 00:06:09.868 } 00:06:09.868 ]' 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.868 { 00:06:09.868 "nbd_device": "/dev/nbd0", 00:06:09.868 "bdev_name": "Malloc0" 00:06:09.868 }, 00:06:09.868 { 00:06:09.868 "nbd_device": "/dev/nbd1", 00:06:09.868 "bdev_name": "Malloc1" 00:06:09.868 } 00:06:09.868 ]' 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.868 /dev/nbd1' 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.868 /dev/nbd1' 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.868 256+0 records in 00:06:09.868 256+0 records out 00:06:09.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610239 s, 172 MB/s 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.868 256+0 records in 00:06:09.868 256+0 records out 00:06:09.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320527 s, 32.7 MB/s 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.868 256+0 records in 00:06:09.868 256+0 records out 00:06:09.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274933 s, 38.1 MB/s 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.868 05:03:28 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.868 05:03:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.869 05:03:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.869 05:03:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.869 05:03:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.869 05:03:28 -- bdev/nbd_common.sh@51 -- # local i 00:06:09.869 05:03:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.869 05:03:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.127 05:03:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.127 05:03:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.127 05:03:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.127 05:03:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.127 05:03:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.127 05:03:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.127 05:03:29 -- bdev/nbd_common.sh@41 -- # break 00:06:10.127 05:03:29 -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.127 05:03:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.127 05:03:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.386 05:03:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.386 05:03:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.386 05:03:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.386 05:03:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.387 05:03:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.387 05:03:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.387 05:03:29 -- bdev/nbd_common.sh@41 -- # break 00:06:10.387 05:03:29 -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.387 05:03:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.387 05:03:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.387 05:03:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.646 05:03:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.646 05:03:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.646 05:03:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.646 05:03:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.646 05:03:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.646 05:03:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.646 05:03:29 -- bdev/nbd_common.sh@65 -- # true 00:06:10.646 05:03:29 -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.646 05:03:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.646 05:03:29 -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.646 05:03:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.646 05:03:29 -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.646 05:03:29 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.221 05:03:30 -- event/event.sh@35 -- # sleep 3 00:06:12.597 [2024-07-26 05:03:31.536691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.856 [2024-07-26 05:03:31.816188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
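The app_repeat round that just finished exercises the full nbd data path: fill a scratch file with random data, dd it onto each exported nbd device, then cmp the device contents back against the file before tearing the disks down. A condensed, hedged sketch of that loop, reusing the paths and sizes from the log with error handling trimmed:

  # Condensed per-round data check, simplified from nbd_common.sh.
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it to the device
    cmp -b -n 1M "$tmp" "$nbd"                              # read back and compare
  done
  rm "$tmp"
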
00:06:12.856 [2024-07-26 05:03:31.816198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.115 [2024-07-26 05:03:32.083292] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.115 [2024-07-26 05:03:32.083404] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.050 spdk_app_start Round 2 00:06:14.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.050 05:03:33 -- event/event.sh@23 -- # for i in {0..2} 00:06:14.050 05:03:33 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:14.050 05:03:33 -- event/event.sh@25 -- # waitforlisten 57509 /var/tmp/spdk-nbd.sock 00:06:14.050 05:03:33 -- common/autotest_common.sh@819 -- # '[' -z 57509 ']' 00:06:14.050 05:03:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.050 05:03:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.050 05:03:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.050 05:03:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.050 05:03:33 -- common/autotest_common.sh@10 -- # set +x 00:06:14.309 05:03:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.309 05:03:33 -- common/autotest_common.sh@852 -- # return 0 00:06:14.309 05:03:33 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.567 Malloc0 00:06:14.567 05:03:33 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.825 Malloc1 00:06:14.825 05:03:33 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@12 -- # local i 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.825 05:03:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.084 /dev/nbd0 00:06:15.084 05:03:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.084 05:03:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.084 05:03:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:15.084 05:03:34 -- common/autotest_common.sh@857 -- # local i 00:06:15.084 05:03:34 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:15.084 05:03:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:15.084 05:03:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:15.084 05:03:34 -- common/autotest_common.sh@861 -- # break 00:06:15.084 05:03:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:15.084 05:03:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:15.084 05:03:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.084 1+0 records in 00:06:15.084 1+0 records out 00:06:15.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106492 s, 3.8 MB/s 00:06:15.084 05:03:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.084 05:03:34 -- common/autotest_common.sh@874 -- # size=4096 00:06:15.084 05:03:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.084 05:03:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:15.084 05:03:34 -- common/autotest_common.sh@877 -- # return 0 00:06:15.084 05:03:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.084 05:03:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.084 05:03:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.343 /dev/nbd1 00:06:15.343 05:03:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.343 05:03:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.343 05:03:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:15.343 05:03:34 -- common/autotest_common.sh@857 -- # local i 00:06:15.343 05:03:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:15.343 05:03:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:15.343 05:03:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:15.343 05:03:34 -- common/autotest_common.sh@861 -- # break 00:06:15.343 05:03:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:15.343 05:03:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:15.343 05:03:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.343 1+0 records in 00:06:15.343 1+0 records out 00:06:15.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317699 s, 12.9 MB/s 00:06:15.343 05:03:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.343 05:03:34 -- common/autotest_common.sh@874 -- # size=4096 00:06:15.343 05:03:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.343 05:03:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:15.343 05:03:34 -- common/autotest_common.sh@877 -- # return 0 00:06:15.343 05:03:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.344 05:03:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.344 05:03:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.344 05:03:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.344 05:03:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.602 05:03:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.602 { 00:06:15.602 "nbd_device": "/dev/nbd0", 00:06:15.602 "bdev_name": "Malloc0" 
00:06:15.602 }, 00:06:15.602 { 00:06:15.602 "nbd_device": "/dev/nbd1", 00:06:15.602 "bdev_name": "Malloc1" 00:06:15.602 } 00:06:15.602 ]' 00:06:15.602 05:03:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.602 { 00:06:15.602 "nbd_device": "/dev/nbd0", 00:06:15.602 "bdev_name": "Malloc0" 00:06:15.602 }, 00:06:15.602 { 00:06:15.602 "nbd_device": "/dev/nbd1", 00:06:15.602 "bdev_name": "Malloc1" 00:06:15.602 } 00:06:15.602 ]' 00:06:15.602 05:03:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.602 05:03:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.602 /dev/nbd1' 00:06:15.602 05:03:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.602 05:03:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.602 /dev/nbd1' 00:06:15.602 05:03:34 -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.602 05:03:34 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.602 05:03:34 -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.602 05:03:34 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.602 05:03:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.602 05:03:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.603 05:03:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.603 05:03:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.603 05:03:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.603 05:03:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.603 05:03:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.603 256+0 records in 00:06:15.603 256+0 records out 00:06:15.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00778043 s, 135 MB/s 00:06:15.603 05:03:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.603 05:03:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.603 256+0 records in 00:06:15.603 256+0 records out 00:06:15.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306837 s, 34.2 MB/s 00:06:15.603 05:03:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.603 05:03:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.603 256+0 records in 00:06:15.603 256+0 records out 00:06:15.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278045 s, 37.7 MB/s 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@51 -- # local i 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@41 -- # break 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.862 05:03:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.121 05:03:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.121 05:03:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.121 05:03:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.121 05:03:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.121 05:03:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.121 05:03:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.121 05:03:35 -- bdev/nbd_common.sh@41 -- # break 00:06:16.121 05:03:35 -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.121 05:03:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.121 05:03:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.121 05:03:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.380 05:03:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.380 05:03:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.380 05:03:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.380 05:03:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.380 05:03:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.380 05:03:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.380 05:03:35 -- bdev/nbd_common.sh@65 -- # true 00:06:16.380 05:03:35 -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.380 05:03:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.380 05:03:35 -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.380 05:03:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.380 05:03:35 -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.380 05:03:35 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.948 05:03:35 -- event/event.sh@35 -- # sleep 3 00:06:18.325 [2024-07-26 05:03:37.354065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.584 [2024-07-26 05:03:37.629418] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:18.584 [2024-07-26 05:03:37.629419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.843 [2024-07-26 05:03:37.906425] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.843 [2024-07-26 05:03:37.906512] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.781 05:03:38 -- event/event.sh@38 -- # waitforlisten 57509 /var/tmp/spdk-nbd.sock 00:06:19.781 05:03:38 -- common/autotest_common.sh@819 -- # '[' -z 57509 ']' 00:06:19.781 05:03:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.781 05:03:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.781 05:03:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.781 05:03:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.781 05:03:38 -- common/autotest_common.sh@10 -- # set +x 00:06:20.040 05:03:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:20.040 05:03:39 -- common/autotest_common.sh@852 -- # return 0 00:06:20.040 05:03:39 -- event/event.sh@39 -- # killprocess 57509 00:06:20.040 05:03:39 -- common/autotest_common.sh@926 -- # '[' -z 57509 ']' 00:06:20.040 05:03:39 -- common/autotest_common.sh@930 -- # kill -0 57509 00:06:20.040 05:03:39 -- common/autotest_common.sh@931 -- # uname 00:06:20.040 05:03:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:20.040 05:03:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57509 00:06:20.040 killing process with pid 57509 00:06:20.040 05:03:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:20.040 05:03:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:20.040 05:03:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57509' 00:06:20.040 05:03:39 -- common/autotest_common.sh@945 -- # kill 57509 00:06:20.040 05:03:39 -- common/autotest_common.sh@950 -- # wait 57509 00:06:21.417 spdk_app_start is called in Round 0. 00:06:21.417 Shutdown signal received, stop current app iteration 00:06:21.417 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:21.417 spdk_app_start is called in Round 1. 00:06:21.417 Shutdown signal received, stop current app iteration 00:06:21.417 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:21.417 spdk_app_start is called in Round 2. 00:06:21.417 Shutdown signal received, stop current app iteration 00:06:21.417 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:21.417 spdk_app_start is called in Round 3. 
00:06:21.417 Shutdown signal received, stop current app iteration 00:06:21.417 ************************************ 00:06:21.417 END TEST app_repeat 00:06:21.417 ************************************ 00:06:21.417 05:03:40 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:21.417 05:03:40 -- event/event.sh@42 -- # return 0 00:06:21.417 00:06:21.417 real 0m20.114s 00:06:21.417 user 0m40.599s 00:06:21.417 sys 0m3.192s 00:06:21.417 05:03:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.417 05:03:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.417 05:03:40 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:21.417 05:03:40 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:21.417 05:03:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:21.417 05:03:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.417 05:03:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 ************************************ 00:06:21.677 START TEST cpu_locks 00:06:21.677 ************************************ 00:06:21.677 05:03:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:21.677 * Looking for test storage... 00:06:21.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:21.677 05:03:40 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:21.677 05:03:40 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:21.677 05:03:40 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:21.677 05:03:40 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:21.677 05:03:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:21.677 05:03:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.677 05:03:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 ************************************ 00:06:21.677 START TEST default_locks 00:06:21.677 ************************************ 00:06:21.677 05:03:40 -- common/autotest_common.sh@1104 -- # default_locks 00:06:21.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.677 05:03:40 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57956 00:06:21.677 05:03:40 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.677 05:03:40 -- event/cpu_locks.sh@47 -- # waitforlisten 57956 00:06:21.677 05:03:40 -- common/autotest_common.sh@819 -- # '[' -z 57956 ']' 00:06:21.677 05:03:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.677 05:03:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.677 05:03:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.677 05:03:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.677 05:03:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.677 [2024-07-26 05:03:40.758588] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
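Before the lock tests begin, a note on the nbd write/verify pass traced near the top of this section: nbd_dd_data_verify reduces to a short dd/cmp pattern. The sketch below is reconstructed from the traced commands; the temp-file path, block sizes, and device list are the ones shown in the trace.

  # write phase: fill a 1 MiB temp file with random data, then copy it onto each nbd device
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
  done
  # verify phase: byte-compare the first 1 MiB of each device against the temp file;
  # a non-zero cmp exit status fails the test
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$dev"
  done
  rm "$tmp"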
00:06:21.677 [2024-07-26 05:03:40.759005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57956 ] 00:06:21.936 [2024-07-26 05:03:40.940410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.196 [2024-07-26 05:03:41.170134] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.196 [2024-07-26 05:03:41.170546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.574 05:03:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.574 05:03:42 -- common/autotest_common.sh@852 -- # return 0 00:06:23.574 05:03:42 -- event/cpu_locks.sh@49 -- # locks_exist 57956 00:06:23.574 05:03:42 -- event/cpu_locks.sh@22 -- # lslocks -p 57956 00:06:23.574 05:03:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.833 05:03:42 -- event/cpu_locks.sh@50 -- # killprocess 57956 00:06:23.833 05:03:42 -- common/autotest_common.sh@926 -- # '[' -z 57956 ']' 00:06:23.833 05:03:42 -- common/autotest_common.sh@930 -- # kill -0 57956 00:06:23.833 05:03:42 -- common/autotest_common.sh@931 -- # uname 00:06:23.833 05:03:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.833 05:03:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57956 00:06:23.833 killing process with pid 57956 00:06:23.833 05:03:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:23.833 05:03:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:23.833 05:03:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57956' 00:06:23.833 05:03:42 -- common/autotest_common.sh@945 -- # kill 57956 00:06:23.833 05:03:42 -- common/autotest_common.sh@950 -- # wait 57956 00:06:26.382 05:03:45 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57956 00:06:26.382 05:03:45 -- common/autotest_common.sh@640 -- # local es=0 00:06:26.382 05:03:45 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 57956 00:06:26.382 05:03:45 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:26.382 05:03:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.382 05:03:45 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:26.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.382 05:03:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.382 05:03:45 -- common/autotest_common.sh@643 -- # waitforlisten 57956 00:06:26.382 05:03:45 -- common/autotest_common.sh@819 -- # '[' -z 57956 ']' 00:06:26.382 05:03:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.382 05:03:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.382 05:03:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
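The locks_exist probe traced above is the core assertion of every test in this file. Condensed from the trace, it checks that the target process holds a lock on one of the /var/tmp/spdk_cpu_lock_* files:

  # sketch of locks_exist as traced: lslocks lists the locks held by a pid,
  # and any claimed core shows up as a lock on a spdk_cpu_lock_* file
  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }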
00:06:26.382 05:03:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.382 ERROR: process (pid: 57956) is no longer running 00:06:26.382 05:03:45 -- common/autotest_common.sh@10 -- # set +x 00:06:26.382 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (57956) - No such process 00:06:26.382 05:03:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.382 05:03:45 -- common/autotest_common.sh@852 -- # return 1 00:06:26.382 05:03:45 -- common/autotest_common.sh@643 -- # es=1 00:06:26.382 05:03:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:26.382 05:03:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:26.382 05:03:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:26.382 05:03:45 -- event/cpu_locks.sh@54 -- # no_locks 00:06:26.382 05:03:45 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.382 05:03:45 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.382 05:03:45 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.382 00:06:26.382 real 0m4.671s 00:06:26.382 user 0m4.797s 00:06:26.382 sys 0m0.768s 00:06:26.382 05:03:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.382 05:03:45 -- common/autotest_common.sh@10 -- # set +x 00:06:26.382 ************************************ 00:06:26.382 END TEST default_locks 00:06:26.382 ************************************ 00:06:26.382 05:03:45 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:26.382 05:03:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:26.383 05:03:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.383 05:03:45 -- common/autotest_common.sh@10 -- # set +x 00:06:26.383 ************************************ 00:06:26.383 START TEST default_locks_via_rpc 00:06:26.383 ************************************ 00:06:26.383 05:03:45 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:26.383 05:03:45 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58033 00:06:26.383 05:03:45 -- event/cpu_locks.sh@63 -- # waitforlisten 58033 00:06:26.383 05:03:45 -- common/autotest_common.sh@819 -- # '[' -z 58033 ']' 00:06:26.383 05:03:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.383 05:03:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.383 05:03:45 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.383 05:03:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.383 05:03:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.383 05:03:45 -- common/autotest_common.sh@10 -- # set +x 00:06:26.383 [2024-07-26 05:03:45.486988] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:26.383 [2024-07-26 05:03:45.487232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58033 ] 00:06:26.641 [2024-07-26 05:03:45.652883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.898 [2024-07-26 05:03:45.890179] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.898 [2024-07-26 05:03:45.890391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.328 05:03:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.328 05:03:47 -- common/autotest_common.sh@852 -- # return 0 00:06:28.328 05:03:47 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:28.328 05:03:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:28.328 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.328 05:03:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:28.328 05:03:47 -- event/cpu_locks.sh@67 -- # no_locks 00:06:28.328 05:03:47 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.329 05:03:47 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.329 05:03:47 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.329 05:03:47 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:28.329 05:03:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:28.329 05:03:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.329 05:03:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:28.329 05:03:47 -- event/cpu_locks.sh@71 -- # locks_exist 58033 00:06:28.329 05:03:47 -- event/cpu_locks.sh@22 -- # lslocks -p 58033 00:06:28.329 05:03:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.614 05:03:47 -- event/cpu_locks.sh@73 -- # killprocess 58033 00:06:28.614 05:03:47 -- common/autotest_common.sh@926 -- # '[' -z 58033 ']' 00:06:28.614 05:03:47 -- common/autotest_common.sh@930 -- # kill -0 58033 00:06:28.614 05:03:47 -- common/autotest_common.sh@931 -- # uname 00:06:28.614 05:03:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:28.614 05:03:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58033 00:06:28.614 killing process with pid 58033 00:06:28.614 05:03:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:28.614 05:03:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:28.614 05:03:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58033' 00:06:28.614 05:03:47 -- common/autotest_common.sh@945 -- # kill 58033 00:06:28.614 05:03:47 -- common/autotest_common.sh@950 -- # wait 58033 00:06:31.147 00:06:31.147 real 0m4.569s 00:06:31.147 user 0m4.757s 00:06:31.147 sys 0m0.688s 00:06:31.147 05:03:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.147 05:03:49 -- common/autotest_common.sh@10 -- # set +x 00:06:31.147 ************************************ 00:06:31.147 END TEST default_locks_via_rpc 00:06:31.147 ************************************ 00:06:31.147 05:03:49 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:31.147 05:03:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:31.147 05:03:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.147 05:03:49 -- common/autotest_common.sh@10 -- # set +x 00:06:31.147 
************************************ 00:06:31.147 START TEST non_locking_app_on_locked_coremask 00:06:31.147 ************************************ 00:06:31.147 05:03:49 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:31.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.147 05:03:49 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58120 00:06:31.147 05:03:49 -- event/cpu_locks.sh@81 -- # waitforlisten 58120 /var/tmp/spdk.sock 00:06:31.147 05:03:49 -- common/autotest_common.sh@819 -- # '[' -z 58120 ']' 00:06:31.147 05:03:49 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.147 05:03:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.147 05:03:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.147 05:03:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.147 05:03:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.147 05:03:49 -- common/autotest_common.sh@10 -- # set +x 00:06:31.147 [2024-07-26 05:03:50.079646] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:31.147 [2024-07-26 05:03:50.079984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58120 ] 00:06:31.147 [2024-07-26 05:03:50.239483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.406 [2024-07-26 05:03:50.470434] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.406 [2024-07-26 05:03:50.470823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.781 05:03:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:32.781 05:03:51 -- common/autotest_common.sh@852 -- # return 0 00:06:32.781 05:03:51 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58140 00:06:32.781 05:03:51 -- event/cpu_locks.sh@85 -- # waitforlisten 58140 /var/tmp/spdk2.sock 00:06:32.781 05:03:51 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:32.781 05:03:51 -- common/autotest_common.sh@819 -- # '[' -z 58140 ']' 00:06:32.781 05:03:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.781 05:03:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.781 05:03:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.781 05:03:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.781 05:03:51 -- common/autotest_common.sh@10 -- # set +x 00:06:32.781 [2024-07-26 05:03:51.791756] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:32.781 [2024-07-26 05:03:51.792045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58140 ] 00:06:33.039 [2024-07-26 05:03:51.962358] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.039 [2024-07-26 05:03:51.962412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.607 [2024-07-26 05:03:52.416934] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.607 [2024-07-26 05:03:52.417129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.558 05:03:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.558 05:03:54 -- common/autotest_common.sh@852 -- # return 0 00:06:35.558 05:03:54 -- event/cpu_locks.sh@87 -- # locks_exist 58120 00:06:35.558 05:03:54 -- event/cpu_locks.sh@22 -- # lslocks -p 58120 00:06:35.558 05:03:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.994 05:03:55 -- event/cpu_locks.sh@89 -- # killprocess 58120 00:06:36.994 05:03:55 -- common/autotest_common.sh@926 -- # '[' -z 58120 ']' 00:06:36.994 05:03:55 -- common/autotest_common.sh@930 -- # kill -0 58120 00:06:36.994 05:03:55 -- common/autotest_common.sh@931 -- # uname 00:06:36.994 05:03:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:36.994 05:03:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58120 00:06:36.994 killing process with pid 58120 00:06:36.994 05:03:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:36.994 05:03:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:36.994 05:03:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58120' 00:06:36.994 05:03:55 -- common/autotest_common.sh@945 -- # kill 58120 00:06:36.994 05:03:55 -- common/autotest_common.sh@950 -- # wait 58120 00:06:42.278 05:04:00 -- event/cpu_locks.sh@90 -- # killprocess 58140 00:06:42.278 05:04:00 -- common/autotest_common.sh@926 -- # '[' -z 58140 ']' 00:06:42.278 05:04:00 -- common/autotest_common.sh@930 -- # kill -0 58140 00:06:42.278 05:04:00 -- common/autotest_common.sh@931 -- # uname 00:06:42.278 05:04:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:42.278 05:04:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58140 00:06:42.278 killing process with pid 58140 00:06:42.278 05:04:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:42.278 05:04:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:42.278 05:04:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58140' 00:06:42.278 05:04:00 -- common/autotest_common.sh@945 -- # kill 58140 00:06:42.278 05:04:00 -- common/autotest_common.sh@950 -- # wait 58140 00:06:44.181 ************************************ 00:06:44.181 END TEST non_locking_app_on_locked_coremask 00:06:44.181 ************************************ 00:06:44.181 00:06:44.181 real 0m13.064s 00:06:44.181 user 0m13.838s 00:06:44.181 sys 0m1.610s 00:06:44.181 05:04:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.181 05:04:03 -- common/autotest_common.sh@10 -- # set +x 00:06:44.181 05:04:03 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:44.181 05:04:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.181 05:04:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.181 05:04:03 -- common/autotest_common.sh@10 -- # set +x 00:06:44.181 ************************************ 00:06:44.181 START TEST locking_app_on_unlocked_coremask 00:06:44.181 ************************************ 00:06:44.181 05:04:03 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:44.181 05:04:03 -- 
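non_locking_app_on_locked_coremask, which just passed, boils down to two launches on the same core mask, the second opting out of lock acquisition. A condensed sketch of the traced commands (binary path, mask, and socket are verbatim from the trace):

  # first instance claims core 0 (mask 0x1) and takes the cpu lock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  # second instance reuses the same mask but skips locking entirely, so it
  # starts cleanly on its own RPC socket and logs "CPU core locks deactivated."
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &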
event/cpu_locks.sh@98 -- # spdk_tgt_pid=58309 00:06:44.181 05:04:03 -- event/cpu_locks.sh@99 -- # waitforlisten 58309 /var/tmp/spdk.sock 00:06:44.181 05:04:03 -- common/autotest_common.sh@819 -- # '[' -z 58309 ']' 00:06:44.181 05:04:03 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:44.181 05:04:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.181 05:04:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:44.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.181 05:04:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.181 05:04:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:44.181 05:04:03 -- common/autotest_common.sh@10 -- # set +x 00:06:44.181 [2024-07-26 05:04:03.201017] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:44.181 [2024-07-26 05:04:03.201134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58309 ] 00:06:44.446 [2024-07-26 05:04:03.364495] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:44.446 [2024-07-26 05:04:03.364549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.704 [2024-07-26 05:04:03.606360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.704 [2024-07-26 05:04:03.606568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.081 05:04:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.081 05:04:04 -- common/autotest_common.sh@852 -- # return 0 00:06:46.081 05:04:04 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.081 05:04:04 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58335 00:06:46.081 05:04:04 -- event/cpu_locks.sh@103 -- # waitforlisten 58335 /var/tmp/spdk2.sock 00:06:46.081 05:04:04 -- common/autotest_common.sh@819 -- # '[' -z 58335 ']' 00:06:46.081 05:04:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.081 05:04:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:46.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.081 05:04:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.081 05:04:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:46.081 05:04:04 -- common/autotest_common.sh@10 -- # set +x 00:06:46.081 [2024-07-26 05:04:04.956051] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:46.081 [2024-07-26 05:04:04.956571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58335 ] 00:06:46.081 [2024-07-26 05:04:05.122828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.650 [2024-07-26 05:04:05.599463] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.650 [2024-07-26 05:04:05.599661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.183 05:04:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:49.183 05:04:07 -- common/autotest_common.sh@852 -- # return 0 00:06:49.183 05:04:07 -- event/cpu_locks.sh@105 -- # locks_exist 58335 00:06:49.183 05:04:07 -- event/cpu_locks.sh@22 -- # lslocks -p 58335 00:06:49.183 05:04:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.117 05:04:08 -- event/cpu_locks.sh@107 -- # killprocess 58309 00:06:50.117 05:04:08 -- common/autotest_common.sh@926 -- # '[' -z 58309 ']' 00:06:50.117 05:04:08 -- common/autotest_common.sh@930 -- # kill -0 58309 00:06:50.117 05:04:08 -- common/autotest_common.sh@931 -- # uname 00:06:50.117 05:04:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:50.117 05:04:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58309 00:06:50.117 05:04:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:50.117 05:04:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:50.117 killing process with pid 58309 00:06:50.117 05:04:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58309' 00:06:50.117 05:04:09 -- common/autotest_common.sh@945 -- # kill 58309 00:06:50.117 05:04:09 -- common/autotest_common.sh@950 -- # wait 58309 00:06:55.485 05:04:14 -- event/cpu_locks.sh@108 -- # killprocess 58335 00:06:55.485 05:04:14 -- common/autotest_common.sh@926 -- # '[' -z 58335 ']' 00:06:55.485 05:04:14 -- common/autotest_common.sh@930 -- # kill -0 58335 00:06:55.485 05:04:14 -- common/autotest_common.sh@931 -- # uname 00:06:55.485 05:04:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:55.485 05:04:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58335 00:06:55.485 05:04:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:55.485 05:04:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:55.485 05:04:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58335' 00:06:55.485 killing process with pid 58335 00:06:55.485 05:04:14 -- common/autotest_common.sh@945 -- # kill 58335 00:06:55.485 05:04:14 -- common/autotest_common.sh@950 -- # wait 58335 00:06:58.015 00:06:58.016 real 0m13.741s 00:06:58.016 user 0m14.700s 00:06:58.016 sys 0m1.596s 00:06:58.016 05:04:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.016 ************************************ 00:06:58.016 END TEST locking_app_on_unlocked_coremask 00:06:58.016 ************************************ 00:06:58.016 05:04:16 -- common/autotest_common.sh@10 -- # set +x 00:06:58.016 05:04:16 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:58.016 05:04:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:58.016 05:04:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.016 05:04:16 -- common/autotest_common.sh@10 -- # set 
+x 00:06:58.016 ************************************ 00:06:58.016 START TEST locking_app_on_locked_coremask 00:06:58.016 ************************************ 00:06:58.016 05:04:16 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:58.016 05:04:16 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58504 00:06:58.016 05:04:16 -- event/cpu_locks.sh@116 -- # waitforlisten 58504 /var/tmp/spdk.sock 00:06:58.016 05:04:16 -- common/autotest_common.sh@819 -- # '[' -z 58504 ']' 00:06:58.016 05:04:16 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.016 05:04:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.016 05:04:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:58.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.016 05:04:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.016 05:04:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:58.016 05:04:16 -- common/autotest_common.sh@10 -- # set +x 00:06:58.016 [2024-07-26 05:04:17.069398] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:58.016 [2024-07-26 05:04:17.069594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58504 ] 00:06:58.274 [2024-07-26 05:04:17.261485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.532 [2024-07-26 05:04:17.492829] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:58.532 [2024-07-26 05:04:17.493009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.905 05:04:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:59.905 05:04:18 -- common/autotest_common.sh@852 -- # return 0 00:06:59.905 05:04:18 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58533 00:06:59.905 05:04:18 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58533 /var/tmp/spdk2.sock 00:06:59.905 05:04:18 -- common/autotest_common.sh@640 -- # local es=0 00:06:59.905 05:04:18 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58533 /var/tmp/spdk2.sock 00:06:59.905 05:04:18 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:59.905 05:04:18 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:59.905 05:04:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.905 05:04:18 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:59.905 05:04:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.905 05:04:18 -- common/autotest_common.sh@643 -- # waitforlisten 58533 /var/tmp/spdk2.sock 00:06:59.905 05:04:18 -- common/autotest_common.sh@819 -- # '[' -z 58533 ']' 00:06:59.905 05:04:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.905 05:04:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:59.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.905 05:04:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
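The NOT wrapper appearing throughout these traces inverts an expected failure into a test pass. A simplified sketch of its behavior (the real helper in autotest_common.sh also validates the argument and special-cases the exact signal codes):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return 1   # command died on a signal: count as a real failure
      (( es != 0 ))                # NOT succeeds only when the command failed plainly
  }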
00:06:59.905 05:04:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:59.905 05:04:18 -- common/autotest_common.sh@10 -- # set +x 00:06:59.905 [2024-07-26 05:04:18.728581] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:59.905 [2024-07-26 05:04:18.728701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58533 ] 00:06:59.905 [2024-07-26 05:04:18.898736] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58504 has claimed it. 00:06:59.905 [2024-07-26 05:04:18.898795] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.470 ERROR: process (pid: 58533) is no longer running 00:07:00.470 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58533) - No such process 00:07:00.470 05:04:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:00.470 05:04:19 -- common/autotest_common.sh@852 -- # return 1 00:07:00.470 05:04:19 -- common/autotest_common.sh@643 -- # es=1 00:07:00.470 05:04:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:00.470 05:04:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:00.470 05:04:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:00.470 05:04:19 -- event/cpu_locks.sh@122 -- # locks_exist 58504 00:07:00.470 05:04:19 -- event/cpu_locks.sh@22 -- # lslocks -p 58504 00:07:00.470 05:04:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.034 05:04:19 -- event/cpu_locks.sh@124 -- # killprocess 58504 00:07:01.034 05:04:19 -- common/autotest_common.sh@926 -- # '[' -z 58504 ']' 00:07:01.034 05:04:19 -- common/autotest_common.sh@930 -- # kill -0 58504 00:07:01.034 05:04:19 -- common/autotest_common.sh@931 -- # uname 00:07:01.034 05:04:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:01.034 05:04:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58504 00:07:01.034 05:04:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:01.034 killing process with pid 58504 00:07:01.034 05:04:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:01.034 05:04:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58504' 00:07:01.034 05:04:19 -- common/autotest_common.sh@945 -- # kill 58504 00:07:01.034 05:04:19 -- common/autotest_common.sh@950 -- # wait 58504 00:07:03.560 00:07:03.560 real 0m5.471s 00:07:03.560 user 0m5.864s 00:07:03.560 sys 0m0.937s 00:07:03.560 05:04:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.560 ************************************ 00:07:03.560 END TEST locking_app_on_locked_coremask 00:07:03.560 05:04:22 -- common/autotest_common.sh@10 -- # set +x 00:07:03.560 ************************************ 00:07:03.560 05:04:22 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:03.560 05:04:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:03.560 05:04:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.560 05:04:22 -- common/autotest_common.sh@10 -- # set +x 00:07:03.560 ************************************ 00:07:03.560 START TEST locking_overlapped_coremask 00:07:03.560 ************************************ 00:07:03.560 05:04:22 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:07:03.560 05:04:22 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58602 00:07:03.560 05:04:22 -- event/cpu_locks.sh@133 -- # waitforlisten 58602 /var/tmp/spdk.sock 00:07:03.560 05:04:22 -- common/autotest_common.sh@819 -- # '[' -z 58602 ']' 00:07:03.560 05:04:22 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:03.560 05:04:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.560 05:04:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:03.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.560 05:04:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.560 05:04:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:03.560 05:04:22 -- common/autotest_common.sh@10 -- # set +x 00:07:03.560 [2024-07-26 05:04:22.595417] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:03.560 [2024-07-26 05:04:22.595566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58602 ] 00:07:03.818 [2024-07-26 05:04:22.777540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.077 [2024-07-26 05:04:23.010964] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:04.077 [2024-07-26 05:04:23.011315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.077 [2024-07-26 05:04:23.011589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.077 [2024-07-26 05:04:23.011620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.449 05:04:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:05.449 05:04:24 -- common/autotest_common.sh@852 -- # return 0 00:07:05.449 05:04:24 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:05.449 05:04:24 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58629 00:07:05.449 05:04:24 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58629 /var/tmp/spdk2.sock 00:07:05.449 05:04:24 -- common/autotest_common.sh@640 -- # local es=0 00:07:05.449 05:04:24 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 58629 /var/tmp/spdk2.sock 00:07:05.449 05:04:24 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:05.449 05:04:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:05.449 05:04:24 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:05.449 05:04:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:05.449 05:04:24 -- common/autotest_common.sh@643 -- # waitforlisten 58629 /var/tmp/spdk2.sock 00:07:05.449 05:04:24 -- common/autotest_common.sh@819 -- # '[' -z 58629 ']' 00:07:05.449 05:04:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.449 05:04:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:05.449 05:04:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
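The failure about to be traced is pure mask arithmetic: the first target holds -m 0x7 (cores 0-2) and the second will ask for -m 0x1c (cores 2-4), so the two masks intersect at core 2. A one-liner makes the collision visible:

  # 0x07 = 0b00111 (cores 0,1,2); 0x1c = 0b11100 (cores 2,3,4)
  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2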
00:07:05.449 05:04:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:05.449 05:04:24 -- common/autotest_common.sh@10 -- # set +x 00:07:05.449 [2024-07-26 05:04:24.247867] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:05.449 [2024-07-26 05:04:24.248016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58629 ] 00:07:05.449 [2024-07-26 05:04:24.418823] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58602 has claimed it. 00:07:05.449 [2024-07-26 05:04:24.418885] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:06.014 ERROR: process (pid: 58629) is no longer running 00:07:06.014 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (58629) - No such process 00:07:06.014 05:04:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:06.014 05:04:24 -- common/autotest_common.sh@852 -- # return 1 00:07:06.014 05:04:24 -- common/autotest_common.sh@643 -- # es=1 00:07:06.014 05:04:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:06.014 05:04:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:06.014 05:04:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:06.014 05:04:24 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:06.014 05:04:24 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.014 05:04:24 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.014 05:04:24 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.014 05:04:24 -- event/cpu_locks.sh@141 -- # killprocess 58602 00:07:06.014 05:04:24 -- common/autotest_common.sh@926 -- # '[' -z 58602 ']' 00:07:06.014 05:04:24 -- common/autotest_common.sh@930 -- # kill -0 58602 00:07:06.014 05:04:24 -- common/autotest_common.sh@931 -- # uname 00:07:06.014 05:04:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:06.014 05:04:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58602 00:07:06.014 05:04:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:06.014 05:04:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:06.014 05:04:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58602' 00:07:06.014 killing process with pid 58602 00:07:06.014 05:04:24 -- common/autotest_common.sh@945 -- # kill 58602 00:07:06.014 05:04:24 -- common/autotest_common.sh@950 -- # wait 58602 00:07:08.545 00:07:08.545 real 0m4.946s 00:07:08.545 user 0m13.047s 00:07:08.545 sys 0m0.681s 00:07:08.545 ************************************ 00:07:08.545 END TEST locking_overlapped_coremask 00:07:08.545 ************************************ 00:07:08.545 05:04:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.545 05:04:27 -- common/autotest_common.sh@10 -- # set +x 00:07:08.545 05:04:27 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:08.545 05:04:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:08.545 05:04:27 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.545 05:04:27 -- common/autotest_common.sh@10 -- # set +x 00:07:08.545 ************************************ 00:07:08.545 START TEST locking_overlapped_coremask_via_rpc 00:07:08.545 ************************************ 00:07:08.545 05:04:27 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:07:08.545 05:04:27 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58693 00:07:08.545 05:04:27 -- event/cpu_locks.sh@149 -- # waitforlisten 58693 /var/tmp/spdk.sock 00:07:08.545 05:04:27 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:08.545 05:04:27 -- common/autotest_common.sh@819 -- # '[' -z 58693 ']' 00:07:08.545 05:04:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.545 05:04:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:08.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.545 05:04:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.545 05:04:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:08.545 05:04:27 -- common/autotest_common.sh@10 -- # set +x 00:07:08.545 [2024-07-26 05:04:27.615298] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:08.545 [2024-07-26 05:04:27.615456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58693 ] 00:07:08.806 [2024-07-26 05:04:27.797234] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:08.807 [2024-07-26 05:04:27.797282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.065 [2024-07-26 05:04:28.031069] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:09.065 [2024-07-26 05:04:28.031409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.065 [2024-07-26 05:04:28.031768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.065 [2024-07-26 05:04:28.031788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.439 05:04:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:10.439 05:04:29 -- common/autotest_common.sh@852 -- # return 0 00:07:10.439 05:04:29 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58724 00:07:10.439 05:04:29 -- event/cpu_locks.sh@153 -- # waitforlisten 58724 /var/tmp/spdk2.sock 00:07:10.439 05:04:29 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:10.439 05:04:29 -- common/autotest_common.sh@819 -- # '[' -z 58724 ']' 00:07:10.439 05:04:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.439 05:04:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:10.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.439 05:04:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
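check_remaining_locks, traced just above, verifies that a 0x7 target leaves behind exactly the lock files for cores 0-2 and nothing else; it is a plain glob-versus-brace-expansion comparison, condensed here from the trace:

  check_remaining_locks() {
      locks=(/var/tmp/spdk_cpu_lock_*)                     # what actually exists
      locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # what a 0x7 mask should create
      [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }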
00:07:10.439 05:04:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:10.439 05:04:29 -- common/autotest_common.sh@10 -- # set +x 00:07:10.439 [2024-07-26 05:04:29.321672] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:10.439 [2024-07-26 05:04:29.322048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58724 ] 00:07:10.439 [2024-07-26 05:04:29.496375] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:10.440 [2024-07-26 05:04:29.496428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.006 [2024-07-26 05:04:29.981955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:11.006 [2024-07-26 05:04:29.982397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.006 [2024-07-26 05:04:29.985889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.006 [2024-07-26 05:04:29.985913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:13.538 05:04:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:13.538 05:04:32 -- common/autotest_common.sh@852 -- # return 0 00:07:13.538 05:04:32 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:13.538 05:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:13.538 05:04:32 -- common/autotest_common.sh@10 -- # set +x 00:07:13.538 05:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:13.538 05:04:32 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.538 05:04:32 -- common/autotest_common.sh@640 -- # local es=0 00:07:13.538 05:04:32 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.538 05:04:32 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:07:13.538 05:04:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:13.538 05:04:32 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:07:13.538 05:04:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:13.538 05:04:32 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.538 05:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:13.538 05:04:32 -- common/autotest_common.sh@10 -- # set +x 00:07:13.538 [2024-07-26 05:04:32.232497] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58693 has claimed it. 00:07:13.538 request: 00:07:13.538 { 00:07:13.538 "method": "framework_enable_cpumask_locks", 00:07:13.538 "req_id": 1 00:07:13.538 } 00:07:13.538 Got JSON-RPC error response 00:07:13.538 response: 00:07:13.538 { 00:07:13.538 "code": -32603, 00:07:13.538 "message": "Failed to claim CPU core: 2" 00:07:13.538 } 00:07:13.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
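The via_rpc variant exercises the same collision through the RPC layer instead of at boot: both targets start with --disable-cpumask-locks, then each is asked to claim its cores afterwards. A sketch using the rpc.py client (rpc_cmd in the trace wraps this script):

  # first instance (cores 0-2) claims its cores post-start; this succeeds
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  # second instance (cores 2-4) gets the JSON-RPC error shown above
  # (code -32603, "Failed to claim CPU core: 2") because core 2 is already locked
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks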
00:07:13.538 05:04:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:07:13.538 05:04:32 -- common/autotest_common.sh@643 -- # es=1 00:07:13.538 05:04:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:13.538 05:04:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:13.538 05:04:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:13.538 05:04:32 -- event/cpu_locks.sh@158 -- # waitforlisten 58693 /var/tmp/spdk.sock 00:07:13.538 05:04:32 -- common/autotest_common.sh@819 -- # '[' -z 58693 ']' 00:07:13.538 05:04:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.538 05:04:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:13.538 05:04:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.538 05:04:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:13.538 05:04:32 -- common/autotest_common.sh@10 -- # set +x 00:07:13.538 05:04:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:13.538 05:04:32 -- common/autotest_common.sh@852 -- # return 0 00:07:13.538 05:04:32 -- event/cpu_locks.sh@159 -- # waitforlisten 58724 /var/tmp/spdk2.sock 00:07:13.538 05:04:32 -- common/autotest_common.sh@819 -- # '[' -z 58724 ']' 00:07:13.538 05:04:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.538 05:04:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:13.538 05:04:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.538 05:04:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:13.538 05:04:32 -- common/autotest_common.sh@10 -- # set +x 00:07:13.797 05:04:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:13.797 05:04:32 -- common/autotest_common.sh@852 -- # return 0 00:07:13.797 05:04:32 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:13.797 05:04:32 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:13.797 ************************************ 00:07:13.797 END TEST locking_overlapped_coremask_via_rpc 00:07:13.797 ************************************ 00:07:13.797 05:04:32 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:13.797 05:04:32 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:13.797 00:07:13.797 real 0m5.221s 00:07:13.797 user 0m1.815s 00:07:13.797 sys 0m0.327s 00:07:13.797 05:04:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.797 05:04:32 -- common/autotest_common.sh@10 -- # set +x 00:07:13.797 05:04:32 -- event/cpu_locks.sh@174 -- # cleanup 00:07:13.797 05:04:32 -- event/cpu_locks.sh@15 -- # [[ -z 58693 ]] 00:07:13.797 05:04:32 -- event/cpu_locks.sh@15 -- # killprocess 58693 00:07:13.797 05:04:32 -- common/autotest_common.sh@926 -- # '[' -z 58693 ']' 00:07:13.797 05:04:32 -- common/autotest_common.sh@930 -- # kill -0 58693 00:07:13.797 05:04:32 -- common/autotest_common.sh@931 -- # uname 00:07:13.797 05:04:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:13.797 05:04:32 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 58693 00:07:13.797 killing process with pid 58693 00:07:13.797 05:04:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:13.797 05:04:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:13.797 05:04:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58693' 00:07:13.797 05:04:32 -- common/autotest_common.sh@945 -- # kill 58693 00:07:13.797 05:04:32 -- common/autotest_common.sh@950 -- # wait 58693 00:07:17.087 05:04:35 -- event/cpu_locks.sh@16 -- # [[ -z 58724 ]] 00:07:17.087 05:04:35 -- event/cpu_locks.sh@16 -- # killprocess 58724 00:07:17.087 05:04:35 -- common/autotest_common.sh@926 -- # '[' -z 58724 ']' 00:07:17.087 05:04:35 -- common/autotest_common.sh@930 -- # kill -0 58724 00:07:17.087 05:04:35 -- common/autotest_common.sh@931 -- # uname 00:07:17.087 05:04:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:17.087 05:04:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58724 00:07:17.087 killing process with pid 58724 00:07:17.087 05:04:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:07:17.087 05:04:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:07:17.087 05:04:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58724' 00:07:17.087 05:04:35 -- common/autotest_common.sh@945 -- # kill 58724 00:07:17.087 05:04:35 -- common/autotest_common.sh@950 -- # wait 58724 00:07:19.619 05:04:38 -- event/cpu_locks.sh@18 -- # rm -f 00:07:19.619 05:04:38 -- event/cpu_locks.sh@1 -- # cleanup 00:07:19.619 05:04:38 -- event/cpu_locks.sh@15 -- # [[ -z 58693 ]] 00:07:19.619 05:04:38 -- event/cpu_locks.sh@15 -- # killprocess 58693 00:07:19.619 05:04:38 -- common/autotest_common.sh@926 -- # '[' -z 58693 ']' 00:07:19.619 05:04:38 -- common/autotest_common.sh@930 -- # kill -0 58693 00:07:19.620 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58693) - No such process 00:07:19.620 Process with pid 58693 is not found 00:07:19.620 05:04:38 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58693 is not found' 00:07:19.620 Process with pid 58724 is not found 00:07:19.620 05:04:38 -- event/cpu_locks.sh@16 -- # [[ -z 58724 ]] 00:07:19.620 05:04:38 -- event/cpu_locks.sh@16 -- # killprocess 58724 00:07:19.620 05:04:38 -- common/autotest_common.sh@926 -- # '[' -z 58724 ']' 00:07:19.620 05:04:38 -- common/autotest_common.sh@930 -- # kill -0 58724 00:07:19.620 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (58724) - No such process 00:07:19.620 05:04:38 -- common/autotest_common.sh@953 -- # echo 'Process with pid 58724 is not found' 00:07:19.620 05:04:38 -- event/cpu_locks.sh@18 -- # rm -f 00:07:19.620 00:07:19.620 real 0m57.584s 00:07:19.620 user 1m38.229s 00:07:19.620 sys 0m7.818s 00:07:19.620 05:04:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.620 05:04:38 -- common/autotest_common.sh@10 -- # set +x 00:07:19.620 ************************************ 00:07:19.620 END TEST cpu_locks 00:07:19.620 ************************************ 00:07:19.620 00:07:19.620 real 1m29.571s 00:07:19.620 user 2m35.630s 00:07:19.620 sys 0m12.305s 00:07:19.620 05:04:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.620 05:04:38 -- common/autotest_common.sh@10 -- # set +x 00:07:19.620 ************************************ 00:07:19.620 END TEST event 00:07:19.620 ************************************ 00:07:19.620 05:04:38 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:19.620 05:04:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:19.620 05:04:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.620 05:04:38 -- common/autotest_common.sh@10 -- # set +x 00:07:19.620 ************************************ 00:07:19.620 START TEST thread 00:07:19.620 ************************************ 00:07:19.620 05:04:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:19.620 * Looking for test storage... 00:07:19.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:19.620 05:04:38 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:19.620 05:04:38 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:19.620 05:04:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.620 05:04:38 -- common/autotest_common.sh@10 -- # set +x 00:07:19.620 ************************************ 00:07:19.620 START TEST thread_poller_perf 00:07:19.620 ************************************ 00:07:19.620 05:04:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:19.620 [2024-07-26 05:04:38.342035] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:19.620 [2024-07-26 05:04:38.342590] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58921 ] 00:07:19.620 [2024-07-26 05:04:38.504832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.878 [2024-07-26 05:04:38.734584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.878 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:21.256 ====================================== 00:07:21.256 busy:2110651372 (cyc) 00:07:21.256 total_run_count: 371000 00:07:21.256 tsc_hz: 2100000000 (cyc) 00:07:21.256 ====================================== 00:07:21.256 poller_cost: 5689 (cyc), 2709 (nsec) 00:07:21.256 00:07:21.256 real 0m1.853s 00:07:21.256 user 0m1.639s 00:07:21.256 sys 0m0.100s 00:07:21.256 05:04:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.256 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:07:21.256 ************************************ 00:07:21.256 END TEST thread_poller_perf 00:07:21.256 ************************************ 00:07:21.256 05:04:40 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:21.256 05:04:40 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:21.256 05:04:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.256 05:04:40 -- common/autotest_common.sh@10 -- # set +x 00:07:21.256 ************************************ 00:07:21.256 START TEST thread_poller_perf 00:07:21.256 ************************************ 00:07:21.256 05:04:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:21.256 [2024-07-26 05:04:40.260601] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
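The poller_cost line in the summary above is plain division: busy cycles over total_run_count, then cycles converted to nanoseconds via tsc_hz; the second run that is starting here is read the same way. A sketch of the arithmetic with the run-1 numbers copied from the table:

    # 2110651372 busy cycles spread over 371000 poller invocations
    echo $(( 2110651372 / 371000 ))             # => 5689 (cyc per invocation)
    # tsc_hz 2100000000 means 2.1 cycles per nanosecond
    echo $(( 5689 * 1000000000 / 2100000000 ))  # => 2709 (nsec)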
00:07:21.256 [2024-07-26 05:04:40.260837] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58963 ] 00:07:21.513 [2024-07-26 05:04:40.429468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.771 [2024-07-26 05:04:40.722121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.771 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:23.160 ====================================== 00:07:23.160 busy:2104576262 (cyc) 00:07:23.160 total_run_count: 4995000 00:07:23.160 tsc_hz: 2100000000 (cyc) 00:07:23.160 ====================================== 00:07:23.160 poller_cost: 421 (cyc), 200 (nsec) 00:07:23.160 00:07:23.160 real 0m1.924s 00:07:23.160 user 0m1.699s 00:07:23.160 sys 0m0.116s 00:07:23.160 05:04:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.160 05:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:23.160 ************************************ 00:07:23.160 END TEST thread_poller_perf 00:07:23.160 ************************************ 00:07:23.160 05:04:42 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:23.160 00:07:23.160 real 0m3.983s 00:07:23.160 user 0m3.404s 00:07:23.160 sys 0m0.350s 00:07:23.160 05:04:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.160 05:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:23.160 ************************************ 00:07:23.160 END TEST thread 00:07:23.160 ************************************ 00:07:23.160 05:04:42 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:23.160 05:04:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:23.160 05:04:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.160 05:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:23.160 ************************************ 00:07:23.160 START TEST accel 00:07:23.160 ************************************ 00:07:23.160 05:04:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:23.418 * Looking for test storage... 00:07:23.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:23.418 05:04:42 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:23.418 05:04:42 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:23.418 05:04:42 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:23.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.418 05:04:42 -- accel/accel.sh@59 -- # spdk_tgt_pid=59043 00:07:23.418 05:04:42 -- accel/accel.sh@60 -- # waitforlisten 59043 00:07:23.418 05:04:42 -- common/autotest_common.sh@819 -- # '[' -z 59043 ']' 00:07:23.418 05:04:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.418 05:04:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:23.418 05:04:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
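Before any accel test runs, the suite boots its own spdk_tgt (pid 59043 above) and parks in waitforlisten until the RPC socket answers; the max_retries=100 in the xtrace bounds that wait. A simplified sketch of the idea using the stock RPC client (the real helper in autotest_common.sh does more bookkeeping than this):

    # poll until the target accepts RPCs on its UNIX domain socket
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done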
00:07:23.418 05:04:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:23.418 05:04:42 -- common/autotest_common.sh@10 -- # set +x 00:07:23.418 05:04:42 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:23.418 05:04:42 -- accel/accel.sh@58 -- # build_accel_config 00:07:23.418 05:04:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.418 05:04:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.418 05:04:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.418 05:04:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.418 05:04:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.418 05:04:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.418 05:04:42 -- accel/accel.sh@42 -- # jq -r . 00:07:23.418 [2024-07-26 05:04:42.473265] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:23.418 [2024-07-26 05:04:42.473428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59043 ] 00:07:23.677 [2024-07-26 05:04:42.653458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.935 [2024-07-26 05:04:42.870691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:23.935 [2024-07-26 05:04:42.870880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.872 05:04:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:24.872 05:04:43 -- common/autotest_common.sh@852 -- # return 0 00:07:24.872 05:04:43 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:25.130 05:04:43 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:25.130 05:04:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:25.130 05:04:43 -- common/autotest_common.sh@10 -- # set +x 00:07:25.130 05:04:43 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:25.130 05:04:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:25.130 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.130 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.130 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.130 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.130 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.130 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.130 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.130 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.130 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.130 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.130 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.130 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.130 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.130 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.130 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.130 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.130 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.130 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.130 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.131 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.131 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.131 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.131 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.131 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.131 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.131 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.131 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.131 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.131 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.131 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.131 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.131 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:25.131 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.131 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.131 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.131 05:04:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.131 05:04:44 -- accel/accel.sh@64 -- # IFS== 00:07:25.131 05:04:44 -- accel/accel.sh@64 -- # read -r opc module 00:07:25.131 05:04:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:25.131 05:04:44 -- accel/accel.sh@67 -- # killprocess 59043 00:07:25.131 05:04:44 -- common/autotest_common.sh@926 -- # '[' -z 59043 ']' 00:07:25.131 05:04:44 -- common/autotest_common.sh@930 -- # kill -0 59043 00:07:25.131 05:04:44 -- common/autotest_common.sh@931 -- # uname 00:07:25.131 05:04:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:25.131 05:04:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59043 00:07:25.131 05:04:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:25.131 05:04:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:25.131 killing process with pid 59043 00:07:25.131 05:04:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59043' 00:07:25.131 05:04:44 -- common/autotest_common.sh@945 -- # kill 59043 00:07:25.131 05:04:44 -- common/autotest_common.sh@950 -- # wait 59043 00:07:27.664 05:04:46 -- accel/accel.sh@68 -- # trap - ERR 00:07:27.664 05:04:46 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:27.664 05:04:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:27.664 05:04:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.664 05:04:46 -- common/autotest_common.sh@10 -- # set +x 00:07:27.664 05:04:46 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:07:27.664 05:04:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:27.664 05:04:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.664 05:04:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.664 05:04:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.664 05:04:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.664 05:04:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.664 05:04:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.664 05:04:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.664 05:04:46 -- accel/accel.sh@42 -- # jq -r . 
00:07:27.664 05:04:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.664 05:04:46 -- common/autotest_common.sh@10 -- # set +x 00:07:27.664 05:04:46 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:27.664 05:04:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:27.664 05:04:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.664 05:04:46 -- common/autotest_common.sh@10 -- # set +x 00:07:27.664 ************************************ 00:07:27.664 START TEST accel_missing_filename 00:07:27.664 ************************************ 00:07:27.664 05:04:46 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:07:27.664 05:04:46 -- common/autotest_common.sh@640 -- # local es=0 00:07:27.664 05:04:46 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:27.664 05:04:46 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:27.664 05:04:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:27.664 05:04:46 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:27.664 05:04:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:27.664 05:04:46 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:07:27.664 05:04:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:27.664 05:04:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.664 05:04:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.664 05:04:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.664 05:04:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.664 05:04:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.664 05:04:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.664 05:04:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.664 05:04:46 -- accel/accel.sh@42 -- # jq -r . 00:07:27.664 [2024-07-26 05:04:46.691535] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:27.664 [2024-07-26 05:04:46.691683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59126 ] 00:07:27.923 [2024-07-26 05:04:46.871703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.182 [2024-07-26 05:04:47.103703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.441 [2024-07-26 05:04:47.355869] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.009 [2024-07-26 05:04:47.895716] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:29.268 A filename is required. 
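accel_missing_filename leans on the harness's NOT wrapper: the test passes precisely because accel_perf fails. A simplified sketch of the idiom (an assumption: the real helper in autotest_common.sh also normalizes exit statuses above 128, which is the es=234 -> es=106 -> es=1 bookkeeping in the lines that follow):

    NOT() {
        # succeed only if the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    # compress with no -l input file aborts with 'A filename is required.'
    NOT accel_perf -t 1 -w compress && echo 'expected failure observed'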
00:07:29.268 05:04:48 -- common/autotest_common.sh@643 -- # es=234 00:07:29.268 05:04:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:29.268 05:04:48 -- common/autotest_common.sh@652 -- # es=106 00:07:29.268 05:04:48 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:29.268 05:04:48 -- common/autotest_common.sh@660 -- # es=1 00:07:29.268 ************************************ 00:07:29.268 END TEST accel_missing_filename 00:07:29.268 ************************************ 00:07:29.268 05:04:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:29.268 00:07:29.268 real 0m1.681s 00:07:29.268 user 0m1.425s 00:07:29.268 sys 0m0.187s 00:07:29.268 05:04:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.268 05:04:48 -- common/autotest_common.sh@10 -- # set +x 00:07:29.268 05:04:48 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:29.268 05:04:48 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:29.268 05:04:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.268 05:04:48 -- common/autotest_common.sh@10 -- # set +x 00:07:29.268 ************************************ 00:07:29.268 START TEST accel_compress_verify 00:07:29.268 ************************************ 00:07:29.268 05:04:48 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:29.268 05:04:48 -- common/autotest_common.sh@640 -- # local es=0 00:07:29.268 05:04:48 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:29.268 05:04:48 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:29.268 05:04:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:29.268 05:04:48 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:29.268 05:04:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:29.268 05:04:48 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:29.527 05:04:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:29.527 05:04:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.527 05:04:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.527 05:04:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.527 05:04:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.527 05:04:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.527 05:04:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.527 05:04:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.527 05:04:48 -- accel/accel.sh@42 -- # jq -r . 00:07:29.527 [2024-07-26 05:04:48.437018] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
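accel_compress_verify drives the same binary at a real input file but keeps -y; per the usage text printed further below, compress does not support verify, so the abort is the expected result. The invocation under test, reassembled from the xtrace above:

    # expected to abort: -y (verify) is rejected for -w compress
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y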
00:07:29.527 [2024-07-26 05:04:48.437172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ] 00:07:29.527 [2024-07-26 05:04:48.619948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.786 [2024-07-26 05:04:48.854878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.045 [2024-07-26 05:04:49.101420] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.613 [2024-07-26 05:04:49.656458] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:31.181 00:07:31.181 Compression does not support the verify option, aborting. 00:07:31.181 05:04:50 -- common/autotest_common.sh@643 -- # es=161 00:07:31.181 05:04:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:31.181 05:04:50 -- common/autotest_common.sh@652 -- # es=33 00:07:31.181 05:04:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:31.181 05:04:50 -- common/autotest_common.sh@660 -- # es=1 00:07:31.181 05:04:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:31.181 00:07:31.181 real 0m1.721s 00:07:31.181 user 0m1.464s 00:07:31.181 sys 0m0.192s 00:07:31.181 05:04:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.181 ************************************ 00:07:31.181 05:04:50 -- common/autotest_common.sh@10 -- # set +x 00:07:31.181 END TEST accel_compress_verify 00:07:31.181 ************************************ 00:07:31.181 05:04:50 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:31.181 05:04:50 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:31.181 05:04:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.181 05:04:50 -- common/autotest_common.sh@10 -- # set +x 00:07:31.181 ************************************ 00:07:31.181 START TEST accel_wrong_workload 00:07:31.181 ************************************ 00:07:31.181 05:04:50 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:07:31.181 05:04:50 -- common/autotest_common.sh@640 -- # local es=0 00:07:31.181 05:04:50 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:31.181 05:04:50 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:31.181 05:04:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:31.181 05:04:50 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:31.181 05:04:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:31.181 05:04:50 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:07:31.181 05:04:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:31.181 05:04:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.181 05:04:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.181 05:04:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.181 05:04:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.181 05:04:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.181 05:04:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.181 05:04:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.181 05:04:50 -- accel/accel.sh@42 -- # jq -r . 
00:07:31.181 Unsupported workload type: foobar 00:07:31.181 [2024-07-26 05:04:50.206322] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:31.181 accel_perf options: 00:07:31.181 [-h help message] 00:07:31.181 [-q queue depth per core] 00:07:31.181 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:31.181 [-T number of threads per core 00:07:31.181 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:31.181 [-t time in seconds] 00:07:31.181 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:31.181 [ dif_verify, , dif_generate, dif_generate_copy 00:07:31.181 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:31.181 [-l for compress/decompress workloads, name of uncompressed input file 00:07:31.181 [-S for crc32c workload, use this seed value (default 0) 00:07:31.181 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:31.181 [-f for fill workload, use this BYTE value (default 255) 00:07:31.181 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:31.181 [-y verify result if this switch is on] 00:07:31.181 [-a tasks to allocate per core (default: same value as -q)] 00:07:31.181 Can be used to spread operations across a wider range of memory. 00:07:31.181 05:04:50 -- common/autotest_common.sh@643 -- # es=1 00:07:31.181 05:04:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:31.181 05:04:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:31.181 05:04:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:31.181 00:07:31.181 real 0m0.090s 00:07:31.181 user 0m0.079s 00:07:31.181 sys 0m0.045s 00:07:31.181 ************************************ 00:07:31.181 END TEST accel_wrong_workload 00:07:31.181 ************************************ 00:07:31.181 05:04:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.181 05:04:50 -- common/autotest_common.sh@10 -- # set +x 00:07:31.181 05:04:50 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:31.181 05:04:50 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:31.181 05:04:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.181 05:04:50 -- common/autotest_common.sh@10 -- # set +x 00:07:31.440 ************************************ 00:07:31.440 START TEST accel_negative_buffers 00:07:31.440 ************************************ 00:07:31.440 05:04:50 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:31.440 05:04:50 -- common/autotest_common.sh@640 -- # local es=0 00:07:31.440 05:04:50 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:31.440 05:04:50 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:31.440 05:04:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:31.440 05:04:50 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:31.440 05:04:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:31.440 05:04:50 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:07:31.440 05:04:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:31.440 05:04:50 -- accel/accel.sh@12 -- # 
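For contrast with the rejected foobar run, any workload name taken from the option list above yields a valid invocation; a sketch (this exact command is not part of the test set here):

    # xor across the documented minimum of two source buffers, with verification
    accel_perf -t 1 -w xor -y -x 2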
build_accel_config 00:07:31.440 05:04:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.440 05:04:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.440 05:04:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.440 05:04:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.440 05:04:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.440 05:04:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.440 05:04:50 -- accel/accel.sh@42 -- # jq -r . 00:07:31.440 -x option must be non-negative. 00:07:31.440 [2024-07-26 05:04:50.355337] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:31.440 accel_perf options: 00:07:31.440 [-h help message] 00:07:31.440 [-q queue depth per core] 00:07:31.440 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:31.440 [-T number of threads per core 00:07:31.440 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:31.440 [-t time in seconds] 00:07:31.440 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:31.440 [ dif_verify, , dif_generate, dif_generate_copy 00:07:31.440 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:31.440 [-l for compress/decompress workloads, name of uncompressed input file 00:07:31.440 [-S for crc32c workload, use this seed value (default 0) 00:07:31.440 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:31.440 [-f for fill workload, use this BYTE value (default 255) 00:07:31.440 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:31.440 [-y verify result if this switch is on] 00:07:31.440 [-a tasks to allocate per core (default: same value as -q)] 00:07:31.440 Can be used to spread operations across a wider range of memory. 
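That second usage dump closes the argument-validation pair: -w foobar is rejected as an unsupported workload and -x -1 as a negative buffer count. The crc32c tests that follow report throughput tables; a sketch of how their MiB/s column follows from the Transfers column (per-transfer bytes are the 4096-byte buffer times the vector count):

    echo $(( 462784 * 4096 / 1048576 ))   # => 1807 MiB/s (crc32c, vector count 1)
    echo $(( 331616 * 8192 / 1048576 ))   # => 2590 MiB/s (crc32c -C 2, two 4096-byte buffers)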
00:07:31.440 05:04:50 -- common/autotest_common.sh@643 -- # es=1 00:07:31.440 05:04:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:31.440 05:04:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:31.440 05:04:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:31.440 00:07:31.440 real 0m0.098s 00:07:31.440 user 0m0.081s 00:07:31.440 sys 0m0.062s 00:07:31.440 05:04:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.440 ************************************ 00:07:31.440 END TEST accel_negative_buffers 00:07:31.440 ************************************ 00:07:31.440 05:04:50 -- common/autotest_common.sh@10 -- # set +x 00:07:31.440 05:04:50 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:31.440 05:04:50 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:31.440 05:04:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.440 05:04:50 -- common/autotest_common.sh@10 -- # set +x 00:07:31.440 ************************************ 00:07:31.440 START TEST accel_crc32c 00:07:31.440 ************************************ 00:07:31.440 05:04:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:31.440 05:04:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.441 05:04:50 -- accel/accel.sh@17 -- # local accel_module 00:07:31.441 05:04:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:31.441 05:04:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:31.441 05:04:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.441 05:04:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.441 05:04:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.441 05:04:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.441 05:04:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.441 05:04:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.441 05:04:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.441 05:04:50 -- accel/accel.sh@42 -- # jq -r . 00:07:31.441 [2024-07-26 05:04:50.495490] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:31.441 [2024-07-26 05:04:50.495613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59246 ] 00:07:31.699 [2024-07-26 05:04:50.662461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.956 [2024-07-26 05:04:50.996603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.490 05:04:53 -- accel/accel.sh@18 -- # out=' 00:07:34.490 SPDK Configuration: 00:07:34.490 Core mask: 0x1 00:07:34.490 00:07:34.490 Accel Perf Configuration: 00:07:34.490 Workload Type: crc32c 00:07:34.490 CRC-32C seed: 32 00:07:34.490 Transfer size: 4096 bytes 00:07:34.490 Vector count 1 00:07:34.490 Module: software 00:07:34.490 Queue depth: 32 00:07:34.490 Allocate depth: 32 00:07:34.490 # threads/core: 1 00:07:34.490 Run time: 1 seconds 00:07:34.490 Verify: Yes 00:07:34.490 00:07:34.490 Running for 1 seconds... 
00:07:34.490 00:07:34.490 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.490 ------------------------------------------------------------------------------------ 00:07:34.490 0,0 462784/s 1807 MiB/s 0 0 00:07:34.490 ==================================================================================== 00:07:34.490 Total 462784/s 1807 MiB/s 0 0' 00:07:34.490 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:34.490 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:34.490 05:04:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:34.490 05:04:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.490 05:04:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.490 05:04:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.490 05:04:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.490 05:04:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.490 05:04:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.491 05:04:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.491 05:04:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:34.491 05:04:53 -- accel/accel.sh@42 -- # jq -r . 00:07:34.491 [2024-07-26 05:04:53.299401] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:34.491 [2024-07-26 05:04:53.299554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59283 ] 00:07:34.491 [2024-07-26 05:04:53.480194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.749 [2024-07-26 05:04:53.708368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val= 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val= 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val=0x1 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val= 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val= 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val=crc32c 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val=32 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val= 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val=software 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val=32 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val=32 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val=1 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val=Yes 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val= 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.008 05:04:53 -- accel/accel.sh@21 -- # val= 00:07:35.008 05:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # IFS=: 00:07:35.008 05:04:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.541 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:37.541 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.541 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.541 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.541 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:37.541 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.541 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.541 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.541 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:37.541 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.541 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.541 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.541 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:37.541 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.541 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.541 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.541 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:37.541 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.541 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.541 05:04:56 -- 
accel/accel.sh@20 -- # read -r var val 00:07:37.541 05:04:56 -- accel/accel.sh@21 -- # val= 00:07:37.541 05:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.541 05:04:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.541 05:04:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.541 05:04:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.541 05:04:56 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:37.541 05:04:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.541 00:07:37.541 real 0m5.684s 00:07:37.541 user 0m5.106s 00:07:37.541 sys 0m0.371s 00:07:37.541 05:04:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.541 05:04:56 -- common/autotest_common.sh@10 -- # set +x 00:07:37.541 ************************************ 00:07:37.541 END TEST accel_crc32c 00:07:37.541 ************************************ 00:07:37.541 05:04:56 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:37.541 05:04:56 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:37.541 05:04:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.541 05:04:56 -- common/autotest_common.sh@10 -- # set +x 00:07:37.541 ************************************ 00:07:37.541 START TEST accel_crc32c_C2 00:07:37.541 ************************************ 00:07:37.541 05:04:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:37.541 05:04:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.541 05:04:56 -- accel/accel.sh@17 -- # local accel_module 00:07:37.541 05:04:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:37.541 05:04:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.541 05:04:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:37.541 05:04:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.541 05:04:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.541 05:04:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.541 05:04:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.541 05:04:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.541 05:04:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.541 05:04:56 -- accel/accel.sh@42 -- # jq -r . 00:07:37.541 [2024-07-26 05:04:56.259490] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:37.541 [2024-07-26 05:04:56.259866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59326 ] 00:07:37.541 [2024-07-26 05:04:56.446366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.800 [2024-07-26 05:04:56.767972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.361 05:04:59 -- accel/accel.sh@18 -- # out=' 00:07:40.361 SPDK Configuration: 00:07:40.361 Core mask: 0x1 00:07:40.361 00:07:40.361 Accel Perf Configuration: 00:07:40.361 Workload Type: crc32c 00:07:40.361 CRC-32C seed: 0 00:07:40.361 Transfer size: 4096 bytes 00:07:40.361 Vector count 2 00:07:40.361 Module: software 00:07:40.361 Queue depth: 32 00:07:40.361 Allocate depth: 32 00:07:40.361 # threads/core: 1 00:07:40.361 Run time: 1 seconds 00:07:40.361 Verify: Yes 00:07:40.361 00:07:40.361 Running for 1 seconds... 
00:07:40.361 00:07:40.361 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.361 ------------------------------------------------------------------------------------ 00:07:40.361 0,0 331616/s 2590 MiB/s 0 0 00:07:40.361 ==================================================================================== 00:07:40.361 Total 331616/s 2590 MiB/s 0 0' 00:07:40.361 05:04:59 -- accel/accel.sh@20 -- # IFS=: 00:07:40.361 05:04:59 -- accel/accel.sh@20 -- # read -r var val 00:07:40.361 05:04:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:40.361 05:04:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:40.361 05:04:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.361 05:04:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.361 05:04:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.361 05:04:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.361 05:04:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.361 05:04:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.361 05:04:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.361 05:04:59 -- accel/accel.sh@42 -- # jq -r . 00:07:40.361 [2024-07-26 05:04:59.291541] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:40.361 [2024-07-26 05:04:59.291699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59363 ] 00:07:40.619 [2024-07-26 05:04:59.473583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.878 [2024-07-26 05:04:59.750855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.136 05:05:00 -- accel/accel.sh@21 -- # val= 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.136 05:05:00 -- accel/accel.sh@21 -- # val= 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.136 05:05:00 -- accel/accel.sh@21 -- # val=0x1 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.136 05:05:00 -- accel/accel.sh@21 -- # val= 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.136 05:05:00 -- accel/accel.sh@21 -- # val= 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.136 05:05:00 -- accel/accel.sh@21 -- # val=crc32c 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.136 05:05:00 -- accel/accel.sh@21 -- # val=0 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.136 05:05:00 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.136 05:05:00 -- accel/accel.sh@21 -- # val= 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.136 05:05:00 -- accel/accel.sh@21 -- # val=software 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.136 05:05:00 -- accel/accel.sh@21 -- # val=32 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.136 05:05:00 -- accel/accel.sh@21 -- # val=32 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.136 05:05:00 -- accel/accel.sh@21 -- # val=1 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.136 05:05:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.136 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.136 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.137 05:05:00 -- accel/accel.sh@21 -- # val=Yes 00:07:41.137 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.137 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.137 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.137 05:05:00 -- accel/accel.sh@21 -- # val= 00:07:41.137 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.137 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.137 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:41.137 05:05:00 -- accel/accel.sh@21 -- # val= 00:07:41.137 05:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.137 05:05:00 -- accel/accel.sh@20 -- # IFS=: 00:07:41.137 05:05:00 -- accel/accel.sh@20 -- # read -r var val 00:07:43.039 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:43.039 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.039 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:43.039 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:43.039 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:43.039 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.039 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:43.039 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:43.039 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:43.039 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.039 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:43.039 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:43.039 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:43.039 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.039 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:43.039 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:43.039 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:43.039 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.039 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:43.039 05:05:01 -- 
accel/accel.sh@20 -- # read -r var val 00:07:43.039 05:05:01 -- accel/accel.sh@21 -- # val= 00:07:43.039 05:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.039 05:05:01 -- accel/accel.sh@20 -- # IFS=: 00:07:43.039 05:05:01 -- accel/accel.sh@20 -- # read -r var val 00:07:43.039 05:05:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.039 05:05:01 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:43.039 ************************************ 00:07:43.039 END TEST accel_crc32c_C2 00:07:43.039 ************************************ 00:07:43.039 05:05:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.039 00:07:43.039 real 0m5.804s 00:07:43.039 user 0m5.188s 00:07:43.039 sys 0m0.405s 00:07:43.039 05:05:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.039 05:05:01 -- common/autotest_common.sh@10 -- # set +x 00:07:43.039 05:05:02 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:43.039 05:05:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:43.039 05:05:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.039 05:05:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.039 ************************************ 00:07:43.039 START TEST accel_copy 00:07:43.039 ************************************ 00:07:43.039 05:05:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:43.039 05:05:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.039 05:05:02 -- accel/accel.sh@17 -- # local accel_module 00:07:43.039 05:05:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:43.039 05:05:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:43.039 05:05:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.039 05:05:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.039 05:05:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.039 05:05:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.039 05:05:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.039 05:05:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.039 05:05:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.039 05:05:02 -- accel/accel.sh@42 -- # jq -r . 00:07:43.039 [2024-07-26 05:05:02.122802] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:43.039 [2024-07-26 05:05:02.122951] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59415 ] 00:07:43.299 [2024-07-26 05:05:02.304307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.557 [2024-07-26 05:05:02.530369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.089 05:05:04 -- accel/accel.sh@18 -- # out=' 00:07:46.089 SPDK Configuration: 00:07:46.089 Core mask: 0x1 00:07:46.089 00:07:46.089 Accel Perf Configuration: 00:07:46.089 Workload Type: copy 00:07:46.089 Transfer size: 4096 bytes 00:07:46.089 Vector count 1 00:07:46.089 Module: software 00:07:46.089 Queue depth: 32 00:07:46.089 Allocate depth: 32 00:07:46.089 # threads/core: 1 00:07:46.089 Run time: 1 seconds 00:07:46.089 Verify: Yes 00:07:46.089 00:07:46.089 Running for 1 seconds... 
00:07:46.089 00:07:46.089 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:46.089 ------------------------------------------------------------------------------------ 00:07:46.089 0,0 337824/s 1319 MiB/s 0 0 00:07:46.089 ==================================================================================== 00:07:46.089 Total 337824/s 1319 MiB/s 0 0' 00:07:46.089 05:05:04 -- accel/accel.sh@20 -- # IFS=: 00:07:46.089 05:05:04 -- accel/accel.sh@20 -- # read -r var val 00:07:46.089 05:05:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:46.089 05:05:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:46.089 05:05:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.089 05:05:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.089 05:05:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.089 05:05:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.089 05:05:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.089 05:05:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.089 05:05:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.089 05:05:04 -- accel/accel.sh@42 -- # jq -r . 00:07:46.089 [2024-07-26 05:05:04.865117] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:46.089 [2024-07-26 05:05:04.865282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59447 ] 00:07:46.089 [2024-07-26 05:05:05.047742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.347 [2024-07-26 05:05:05.270660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val= 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val= 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val=0x1 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val= 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val= 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val=copy 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- 
accel/accel.sh@21 -- # val= 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val=software 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@23 -- # accel_module=software 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val=32 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val=32 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val=1 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val=Yes 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val= 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:46.606 05:05:05 -- accel/accel.sh@21 -- # val= 00:07:46.606 05:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # IFS=: 00:07:46.606 05:05:05 -- accel/accel.sh@20 -- # read -r var val 00:07:48.510 05:05:07 -- accel/accel.sh@21 -- # val= 00:07:48.510 05:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.510 05:05:07 -- accel/accel.sh@20 -- # IFS=: 00:07:48.510 05:05:07 -- accel/accel.sh@20 -- # read -r var val 00:07:48.510 05:05:07 -- accel/accel.sh@21 -- # val= 00:07:48.510 05:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.510 05:05:07 -- accel/accel.sh@20 -- # IFS=: 00:07:48.510 05:05:07 -- accel/accel.sh@20 -- # read -r var val 00:07:48.510 05:05:07 -- accel/accel.sh@21 -- # val= 00:07:48.510 05:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.510 05:05:07 -- accel/accel.sh@20 -- # IFS=: 00:07:48.510 05:05:07 -- accel/accel.sh@20 -- # read -r var val 00:07:48.510 05:05:07 -- accel/accel.sh@21 -- # val= 00:07:48.510 05:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.510 05:05:07 -- accel/accel.sh@20 -- # IFS=: 00:07:48.510 05:05:07 -- accel/accel.sh@20 -- # read -r var val 00:07:48.510 05:05:07 -- accel/accel.sh@21 -- # val= 00:07:48.510 05:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.510 05:05:07 -- accel/accel.sh@20 -- # IFS=: 00:07:48.510 05:05:07 -- accel/accel.sh@20 -- # read -r var val 00:07:48.510 05:05:07 -- accel/accel.sh@21 -- # val= 00:07:48.510 05:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.510 05:05:07 -- accel/accel.sh@20 -- # IFS=: 00:07:48.510 05:05:07 -- 
accel/accel.sh@20 -- # read -r var val 00:07:48.510 05:05:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.510 05:05:07 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:48.511 05:05:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.511 00:07:48.511 real 0m5.464s 00:07:48.511 user 0m4.856s 00:07:48.511 sys 0m0.401s 00:07:48.511 05:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.511 05:05:07 -- common/autotest_common.sh@10 -- # set +x 00:07:48.511 ************************************ 00:07:48.511 END TEST accel_copy 00:07:48.511 ************************************ 00:07:48.511 05:05:07 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:48.511 05:05:07 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:48.511 05:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.511 05:05:07 -- common/autotest_common.sh@10 -- # set +x 00:07:48.511 ************************************ 00:07:48.511 START TEST accel_fill 00:07:48.511 ************************************ 00:07:48.511 05:05:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:48.511 05:05:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.511 05:05:07 -- accel/accel.sh@17 -- # local accel_module 00:07:48.511 05:05:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:48.511 05:05:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:48.511 05:05:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.511 05:05:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.511 05:05:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.511 05:05:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.511 05:05:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.511 05:05:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.511 05:05:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.511 05:05:07 -- accel/accel.sh@42 -- # jq -r . 00:07:48.769 [2024-07-26 05:05:07.645558] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:48.769 [2024-07-26 05:05:07.645889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59499 ] 00:07:48.769 [2024-07-26 05:05:07.826803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.028 [2024-07-26 05:05:08.077596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.562 05:05:10 -- accel/accel.sh@18 -- # out=' 00:07:51.562 SPDK Configuration: 00:07:51.562 Core mask: 0x1 00:07:51.562 00:07:51.562 Accel Perf Configuration: 00:07:51.562 Workload Type: fill 00:07:51.562 Fill pattern: 0x80 00:07:51.562 Transfer size: 4096 bytes 00:07:51.562 Vector count 1 00:07:51.562 Module: software 00:07:51.562 Queue depth: 64 00:07:51.562 Allocate depth: 64 00:07:51.562 # threads/core: 1 00:07:51.562 Run time: 1 seconds 00:07:51.562 Verify: Yes 00:07:51.562 00:07:51.562 Running for 1 seconds... 
00:07:51.562 00:07:51.562 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:51.562 ------------------------------------------------------------------------------------ 00:07:51.562 0,0 513856/s 2007 MiB/s 0 0 00:07:51.562 ==================================================================================== 00:07:51.562 Total 513856/s 2007 MiB/s 0 0' 00:07:51.562 05:05:10 -- accel/accel.sh@20 -- # IFS=: 00:07:51.562 05:05:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:51.562 05:05:10 -- accel/accel.sh@20 -- # read -r var val 00:07:51.562 05:05:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:51.562 05:05:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.562 05:05:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.562 05:05:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.562 05:05:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.562 05:05:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.562 05:05:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.562 05:05:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.562 05:05:10 -- accel/accel.sh@42 -- # jq -r . 00:07:51.562 [2024-07-26 05:05:10.421280] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:51.562 [2024-07-26 05:05:10.421685] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59530 ] 00:07:51.562 [2024-07-26 05:05:10.610154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.130 [2024-07-26 05:05:10.950263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.130 05:05:11 -- accel/accel.sh@21 -- # val= 00:07:52.130 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.130 05:05:11 -- accel/accel.sh@21 -- # val= 00:07:52.130 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.130 05:05:11 -- accel/accel.sh@21 -- # val=0x1 00:07:52.130 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.130 05:05:11 -- accel/accel.sh@21 -- # val= 00:07:52.130 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.130 05:05:11 -- accel/accel.sh@21 -- # val= 00:07:52.130 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.130 05:05:11 -- accel/accel.sh@21 -- # val=fill 00:07:52.130 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.130 05:05:11 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.130 05:05:11 -- accel/accel.sh@21 -- # val=0x80 00:07:52.130 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # read -r var val 
00:07:52.130 05:05:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:52.130 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.130 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.131 05:05:11 -- accel/accel.sh@21 -- # val= 00:07:52.131 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.131 05:05:11 -- accel/accel.sh@21 -- # val=software 00:07:52.131 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.131 05:05:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.131 05:05:11 -- accel/accel.sh@21 -- # val=64 00:07:52.131 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.131 05:05:11 -- accel/accel.sh@21 -- # val=64 00:07:52.131 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.131 05:05:11 -- accel/accel.sh@21 -- # val=1 00:07:52.131 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.131 05:05:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:52.131 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.131 05:05:11 -- accel/accel.sh@21 -- # val=Yes 00:07:52.131 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.131 05:05:11 -- accel/accel.sh@21 -- # val= 00:07:52.131 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:52.131 05:05:11 -- accel/accel.sh@21 -- # val= 00:07:52.131 05:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # IFS=: 00:07:52.131 05:05:11 -- accel/accel.sh@20 -- # read -r var val 00:07:54.663 05:05:13 -- accel/accel.sh@21 -- # val= 00:07:54.663 05:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.663 05:05:13 -- accel/accel.sh@20 -- # IFS=: 00:07:54.663 05:05:13 -- accel/accel.sh@20 -- # read -r var val 00:07:54.663 05:05:13 -- accel/accel.sh@21 -- # val= 00:07:54.663 05:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.663 05:05:13 -- accel/accel.sh@20 -- # IFS=: 00:07:54.663 05:05:13 -- accel/accel.sh@20 -- # read -r var val 00:07:54.663 05:05:13 -- accel/accel.sh@21 -- # val= 00:07:54.663 05:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.663 05:05:13 -- accel/accel.sh@20 -- # IFS=: 00:07:54.663 05:05:13 -- accel/accel.sh@20 -- # read -r var val 00:07:54.663 05:05:13 -- accel/accel.sh@21 -- # val= 00:07:54.663 05:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.663 05:05:13 -- accel/accel.sh@20 -- # IFS=: 00:07:54.663 05:05:13 -- accel/accel.sh@20 -- # read -r var val 00:07:54.663 05:05:13 -- accel/accel.sh@21 -- # val= 00:07:54.663 05:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.663 05:05:13 -- accel/accel.sh@20 -- # IFS=: 
00:07:54.663 05:05:13 -- accel/accel.sh@20 -- # read -r var val 00:07:54.663 05:05:13 -- accel/accel.sh@21 -- # val= 00:07:54.663 05:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.663 05:05:13 -- accel/accel.sh@20 -- # IFS=: 00:07:54.663 05:05:13 -- accel/accel.sh@20 -- # read -r var val 00:07:54.663 ************************************ 00:07:54.663 END TEST accel_fill 00:07:54.663 ************************************ 00:07:54.663 05:05:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:54.663 05:05:13 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:54.663 05:05:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.663 00:07:54.663 real 0m5.746s 00:07:54.663 user 0m5.141s 00:07:54.663 sys 0m0.389s 00:07:54.663 05:05:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.663 05:05:13 -- common/autotest_common.sh@10 -- # set +x 00:07:54.663 05:05:13 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:54.663 05:05:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:54.663 05:05:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.663 05:05:13 -- common/autotest_common.sh@10 -- # set +x 00:07:54.663 ************************************ 00:07:54.663 START TEST accel_copy_crc32c 00:07:54.663 ************************************ 00:07:54.663 05:05:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:54.663 05:05:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.663 05:05:13 -- accel/accel.sh@17 -- # local accel_module 00:07:54.663 05:05:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:54.663 05:05:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:54.663 05:05:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.663 05:05:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.663 05:05:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.663 05:05:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.663 05:05:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.663 05:05:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.663 05:05:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.663 05:05:13 -- accel/accel.sh@42 -- # jq -r . 00:07:54.663 [2024-07-26 05:05:13.440831] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:54.663 [2024-07-26 05:05:13.440940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59577 ] 00:07:54.663 [2024-07-26 05:05:13.602720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.922 [2024-07-26 05:05:13.840389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.455 05:05:16 -- accel/accel.sh@18 -- # out=' 00:07:57.455 SPDK Configuration: 00:07:57.455 Core mask: 0x1 00:07:57.455 00:07:57.455 Accel Perf Configuration: 00:07:57.455 Workload Type: copy_crc32c 00:07:57.455 CRC-32C seed: 0 00:07:57.455 Vector size: 4096 bytes 00:07:57.455 Transfer size: 4096 bytes 00:07:57.455 Vector count 1 00:07:57.455 Module: software 00:07:57.455 Queue depth: 32 00:07:57.455 Allocate depth: 32 00:07:57.455 # threads/core: 1 00:07:57.455 Run time: 1 seconds 00:07:57.455 Verify: Yes 00:07:57.455 00:07:57.455 Running for 1 seconds... 
00:07:57.455 00:07:57.455 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:57.455 ------------------------------------------------------------------------------------ 00:07:57.455 0,0 270976/s 1058 MiB/s 0 0 00:07:57.455 ==================================================================================== 00:07:57.455 Total 270976/s 1058 MiB/s 0 0' 00:07:57.455 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.455 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.455 05:05:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:57.455 05:05:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:57.455 05:05:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:57.455 05:05:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:57.455 05:05:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.455 05:05:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.455 05:05:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:57.455 05:05:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:57.455 05:05:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:57.455 05:05:16 -- accel/accel.sh@42 -- # jq -r . 00:07:57.455 [2024-07-26 05:05:16.133315] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:57.455 [2024-07-26 05:05:16.133457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59614 ] 00:07:57.455 [2024-07-26 05:05:16.316373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.455 [2024-07-26 05:05:16.552289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.713 05:05:16 -- accel/accel.sh@21 -- # val= 00:07:57.713 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.713 05:05:16 -- accel/accel.sh@21 -- # val= 00:07:57.713 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.713 05:05:16 -- accel/accel.sh@21 -- # val=0x1 00:07:57.713 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.713 05:05:16 -- accel/accel.sh@21 -- # val= 00:07:57.713 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.713 05:05:16 -- accel/accel.sh@21 -- # val= 00:07:57.713 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.713 05:05:16 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:57.713 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.713 05:05:16 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.713 05:05:16 -- accel/accel.sh@21 -- # val=0 00:07:57.713 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.713 
05:05:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:57.713 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.713 05:05:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:57.713 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.713 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.714 05:05:16 -- accel/accel.sh@21 -- # val= 00:07:57.714 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.714 05:05:16 -- accel/accel.sh@21 -- # val=software 00:07:57.714 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.714 05:05:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.714 05:05:16 -- accel/accel.sh@21 -- # val=32 00:07:57.714 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.714 05:05:16 -- accel/accel.sh@21 -- # val=32 00:07:57.714 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.714 05:05:16 -- accel/accel.sh@21 -- # val=1 00:07:57.714 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.714 05:05:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:57.714 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.714 05:05:16 -- accel/accel.sh@21 -- # val=Yes 00:07:57.714 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.714 05:05:16 -- accel/accel.sh@21 -- # val= 00:07:57.714 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:07:57.714 05:05:16 -- accel/accel.sh@21 -- # val= 00:07:57.714 05:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # IFS=: 00:07:57.714 05:05:16 -- accel/accel.sh@20 -- # read -r var val 00:08:00.261 05:05:18 -- accel/accel.sh@21 -- # val= 00:08:00.261 05:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.261 05:05:18 -- accel/accel.sh@20 -- # IFS=: 00:08:00.261 05:05:18 -- accel/accel.sh@20 -- # read -r var val 00:08:00.261 05:05:18 -- accel/accel.sh@21 -- # val= 00:08:00.261 05:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.261 05:05:18 -- accel/accel.sh@20 -- # IFS=: 00:08:00.261 05:05:18 -- accel/accel.sh@20 -- # read -r var val 00:08:00.261 05:05:18 -- accel/accel.sh@21 -- # val= 00:08:00.261 05:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.261 05:05:18 -- accel/accel.sh@20 -- # IFS=: 00:08:00.261 05:05:18 -- accel/accel.sh@20 -- # read -r var val 00:08:00.261 05:05:18 -- accel/accel.sh@21 -- # val= 00:08:00.261 05:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.261 05:05:18 -- accel/accel.sh@20 -- # IFS=: 
00:08:00.261 05:05:18 -- accel/accel.sh@20 -- # read -r var val 00:08:00.261 05:05:18 -- accel/accel.sh@21 -- # val= 00:08:00.261 05:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.261 05:05:18 -- accel/accel.sh@20 -- # IFS=: 00:08:00.261 05:05:18 -- accel/accel.sh@20 -- # read -r var val 00:08:00.261 05:05:18 -- accel/accel.sh@21 -- # val= 00:08:00.261 05:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.261 05:05:18 -- accel/accel.sh@20 -- # IFS=: 00:08:00.261 05:05:18 -- accel/accel.sh@20 -- # read -r var val 00:08:00.261 05:05:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:00.261 05:05:18 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:00.261 05:05:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.261 00:08:00.261 real 0m5.396s 00:08:00.261 user 0m4.811s 00:08:00.261 sys 0m0.376s 00:08:00.261 05:05:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.261 ************************************ 00:08:00.261 END TEST accel_copy_crc32c 00:08:00.261 ************************************ 00:08:00.261 05:05:18 -- common/autotest_common.sh@10 -- # set +x 00:08:00.261 05:05:18 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:00.261 05:05:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:00.261 05:05:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.261 05:05:18 -- common/autotest_common.sh@10 -- # set +x 00:08:00.261 ************************************ 00:08:00.261 START TEST accel_copy_crc32c_C2 00:08:00.261 ************************************ 00:08:00.261 05:05:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:00.261 05:05:18 -- accel/accel.sh@16 -- # local accel_opc 00:08:00.261 05:05:18 -- accel/accel.sh@17 -- # local accel_module 00:08:00.261 05:05:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:00.261 05:05:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:00.261 05:05:18 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.261 05:05:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:00.261 05:05:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.261 05:05:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.261 05:05:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:00.261 05:05:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:00.261 05:05:18 -- accel/accel.sh@41 -- # local IFS=, 00:08:00.261 05:05:18 -- accel/accel.sh@42 -- # jq -r . 00:08:00.261 [2024-07-26 05:05:18.881708] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:00.261 [2024-07-26 05:05:18.881837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59666 ] 00:08:00.261 [2024-07-26 05:05:19.043989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.261 [2024-07-26 05:05:19.273195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.793 05:05:21 -- accel/accel.sh@18 -- # out=' 00:08:02.793 SPDK Configuration: 00:08:02.793 Core mask: 0x1 00:08:02.793 00:08:02.793 Accel Perf Configuration: 00:08:02.793 Workload Type: copy_crc32c 00:08:02.793 CRC-32C seed: 0 00:08:02.793 Vector size: 4096 bytes 00:08:02.793 Transfer size: 8192 bytes 00:08:02.793 Vector count 2 00:08:02.793 Module: software 00:08:02.793 Queue depth: 32 00:08:02.793 Allocate depth: 32 00:08:02.793 # threads/core: 1 00:08:02.793 Run time: 1 seconds 00:08:02.793 Verify: Yes 00:08:02.793 00:08:02.793 Running for 1 seconds... 00:08:02.793 00:08:02.793 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:02.793 ------------------------------------------------------------------------------------ 00:08:02.793 0,0 192928/s 1507 MiB/s 0 0 00:08:02.793 ==================================================================================== 00:08:02.793 Total 192928/s 1507 MiB/s 0 0' 00:08:02.793 05:05:21 -- accel/accel.sh@20 -- # IFS=: 00:08:02.793 05:05:21 -- accel/accel.sh@20 -- # read -r var val 00:08:02.793 05:05:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:02.793 05:05:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:02.793 05:05:21 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.793 05:05:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.793 05:05:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.793 05:05:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.793 05:05:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.793 05:05:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.793 05:05:21 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.793 05:05:21 -- accel/accel.sh@42 -- # jq -r . 00:08:02.793 [2024-07-26 05:05:21.643853] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:02.793 [2024-07-26 05:05:21.644018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59692 ] 00:08:02.793 [2024-07-26 05:05:21.832889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.360 [2024-07-26 05:05:22.169233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val= 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val= 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val=0x1 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val= 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val= 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val=0 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val='8192 bytes' 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val= 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val=software 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@23 -- # accel_module=software 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val=32 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val=32 
00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val=1 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val=Yes 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val= 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.360 05:05:22 -- accel/accel.sh@21 -- # val= 00:08:03.360 05:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.360 05:05:22 -- accel/accel.sh@20 -- # read -r var val 00:08:05.891 05:05:24 -- accel/accel.sh@21 -- # val= 00:08:05.891 05:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.891 05:05:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.891 05:05:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.891 05:05:24 -- accel/accel.sh@21 -- # val= 00:08:05.891 05:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.891 05:05:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.891 05:05:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.891 05:05:24 -- accel/accel.sh@21 -- # val= 00:08:05.891 05:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.891 05:05:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.891 05:05:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.891 05:05:24 -- accel/accel.sh@21 -- # val= 00:08:05.891 05:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.891 05:05:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.891 05:05:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.891 05:05:24 -- accel/accel.sh@21 -- # val= 00:08:05.891 05:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.891 05:05:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.891 05:05:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.891 05:05:24 -- accel/accel.sh@21 -- # val= 00:08:05.891 05:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.891 05:05:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.891 05:05:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.891 05:05:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:05.891 05:05:24 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:05.891 05:05:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.891 00:08:05.891 real 0m5.569s 00:08:05.891 user 0m4.974s 00:08:05.891 sys 0m0.380s 00:08:05.891 05:05:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.891 ************************************ 00:08:05.891 END TEST accel_copy_crc32c_C2 00:08:05.891 ************************************ 00:08:05.891 05:05:24 -- common/autotest_common.sh@10 -- # set +x 00:08:05.891 05:05:24 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:05.891 05:05:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:08:05.891 05:05:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:05.891 05:05:24 -- common/autotest_common.sh@10 -- # set +x 00:08:05.891 ************************************ 00:08:05.891 START TEST accel_dualcast 00:08:05.891 ************************************ 00:08:05.891 05:05:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:08:05.891 05:05:24 -- accel/accel.sh@16 -- # local accel_opc 00:08:05.891 05:05:24 -- accel/accel.sh@17 -- # local accel_module 00:08:05.891 05:05:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:08:05.891 05:05:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:05.891 05:05:24 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.891 05:05:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:05.891 05:05:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.891 05:05:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.891 05:05:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:05.891 05:05:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:05.891 05:05:24 -- accel/accel.sh@41 -- # local IFS=, 00:08:05.891 05:05:24 -- accel/accel.sh@42 -- # jq -r . 00:08:05.891 [2024-07-26 05:05:24.526443] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:05.891 [2024-07-26 05:05:24.526748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59744 ] 00:08:05.891 [2024-07-26 05:05:24.706792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.891 [2024-07-26 05:05:24.941791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.424 05:05:27 -- accel/accel.sh@18 -- # out=' 00:08:08.424 SPDK Configuration: 00:08:08.424 Core mask: 0x1 00:08:08.424 00:08:08.424 Accel Perf Configuration: 00:08:08.424 Workload Type: dualcast 00:08:08.424 Transfer size: 4096 bytes 00:08:08.424 Vector count 1 00:08:08.424 Module: software 00:08:08.424 Queue depth: 32 00:08:08.424 Allocate depth: 32 00:08:08.424 # threads/core: 1 00:08:08.424 Run time: 1 seconds 00:08:08.424 Verify: Yes 00:08:08.424 00:08:08.424 Running for 1 seconds... 00:08:08.424 00:08:08.424 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:08.424 ------------------------------------------------------------------------------------ 00:08:08.424 0,0 381824/s 1491 MiB/s 0 0 00:08:08.424 ==================================================================================== 00:08:08.424 Total 381824/s 1491 MiB/s 0 0' 00:08:08.424 05:05:27 -- accel/accel.sh@20 -- # IFS=: 00:08:08.425 05:05:27 -- accel/accel.sh@20 -- # read -r var val 00:08:08.425 05:05:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:08.425 05:05:27 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.425 05:05:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:08.425 05:05:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:08.425 05:05:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.425 05:05:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.425 05:05:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:08.425 05:05:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:08.425 05:05:27 -- accel/accel.sh@41 -- # local IFS=, 00:08:08.425 05:05:27 -- accel/accel.sh@42 -- # jq -r . 
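Each accel test in this run drives the same binary with only -t/-w/-y plus workload-specific flags, and an empty JSON config streamed over /dev/fd/62 (accel_json_cfg is empty in every trace above). A minimal standalone reproduction, sketched under the assumption that the SPDK build tree sits at /home/vagrant/spdk_repo/spdk as in this job and that the empty config can simply be omitted:

  # flags copied from the invocations recorded in this log; nothing else is passed
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w dualcast -y                 # dualcast; defaults seen above: 4096-byte transfers, queue depth 32
  ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y  # fill pattern 0x80 (128 decimal), queue depth 64, allocate depth 64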
00:08:08.425 [2024-07-26 05:05:27.404783] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:08.425 [2024-07-26 05:05:27.404990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59780 ] 00:08:08.683 [2024-07-26 05:05:27.595664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.940 [2024-07-26 05:05:27.938016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val= 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val= 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val=0x1 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val= 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val= 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val=dualcast 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val= 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val=software 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@23 -- # accel_module=software 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val=32 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val=32 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val=1 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 
05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val=Yes 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val= 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.198 05:05:28 -- accel/accel.sh@21 -- # val= 00:08:09.198 05:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.198 05:05:28 -- accel/accel.sh@20 -- # read -r var val 00:08:11.101 05:05:30 -- accel/accel.sh@21 -- # val= 00:08:11.101 05:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.101 05:05:30 -- accel/accel.sh@20 -- # IFS=: 00:08:11.101 05:05:30 -- accel/accel.sh@20 -- # read -r var val 00:08:11.101 05:05:30 -- accel/accel.sh@21 -- # val= 00:08:11.101 05:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.101 05:05:30 -- accel/accel.sh@20 -- # IFS=: 00:08:11.101 05:05:30 -- accel/accel.sh@20 -- # read -r var val 00:08:11.101 05:05:30 -- accel/accel.sh@21 -- # val= 00:08:11.101 05:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.101 05:05:30 -- accel/accel.sh@20 -- # IFS=: 00:08:11.101 05:05:30 -- accel/accel.sh@20 -- # read -r var val 00:08:11.101 05:05:30 -- accel/accel.sh@21 -- # val= 00:08:11.101 05:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.101 05:05:30 -- accel/accel.sh@20 -- # IFS=: 00:08:11.101 05:05:30 -- accel/accel.sh@20 -- # read -r var val 00:08:11.101 05:05:30 -- accel/accel.sh@21 -- # val= 00:08:11.101 05:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.101 05:05:30 -- accel/accel.sh@20 -- # IFS=: 00:08:11.101 05:05:30 -- accel/accel.sh@20 -- # read -r var val 00:08:11.101 05:05:30 -- accel/accel.sh@21 -- # val= 00:08:11.101 05:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:11.101 05:05:30 -- accel/accel.sh@20 -- # IFS=: 00:08:11.101 05:05:30 -- accel/accel.sh@20 -- # read -r var val 00:08:11.359 05:05:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:11.360 05:05:30 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:08:11.360 05:05:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.360 00:08:11.360 real 0m5.745s 00:08:11.360 user 0m5.130s 00:08:11.360 sys 0m0.396s 00:08:11.360 05:05:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.360 05:05:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.360 ************************************ 00:08:11.360 END TEST accel_dualcast 00:08:11.360 ************************************ 00:08:11.360 05:05:30 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:11.360 05:05:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:11.360 05:05:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.360 05:05:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.360 ************************************ 00:08:11.360 START TEST accel_compare 00:08:11.360 ************************************ 00:08:11.360 05:05:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:08:11.360 
05:05:30 -- accel/accel.sh@16 -- # local accel_opc 00:08:11.360 05:05:30 -- accel/accel.sh@17 -- # local accel_module 00:08:11.360 05:05:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:08:11.360 05:05:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:11.360 05:05:30 -- accel/accel.sh@12 -- # build_accel_config 00:08:11.360 05:05:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:11.360 05:05:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.360 05:05:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.360 05:05:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:11.360 05:05:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:11.360 05:05:30 -- accel/accel.sh@41 -- # local IFS=, 00:08:11.360 05:05:30 -- accel/accel.sh@42 -- # jq -r . 00:08:11.360 [2024-07-26 05:05:30.335767] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:11.360 [2024-07-26 05:05:30.336083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59828 ] 00:08:11.619 [2024-07-26 05:05:30.517053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.876 [2024-07-26 05:05:30.744681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.410 05:05:32 -- accel/accel.sh@18 -- # out=' 00:08:14.410 SPDK Configuration: 00:08:14.410 Core mask: 0x1 00:08:14.410 00:08:14.410 Accel Perf Configuration: 00:08:14.410 Workload Type: compare 00:08:14.410 Transfer size: 4096 bytes 00:08:14.410 Vector count 1 00:08:14.411 Module: software 00:08:14.411 Queue depth: 32 00:08:14.411 Allocate depth: 32 00:08:14.411 # threads/core: 1 00:08:14.411 Run time: 1 seconds 00:08:14.411 Verify: Yes 00:08:14.411 00:08:14.411 Running for 1 seconds... 00:08:14.411 00:08:14.411 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:14.411 ------------------------------------------------------------------------------------ 00:08:14.411 0,0 521120/s 2035 MiB/s 0 0 00:08:14.411 ==================================================================================== 00:08:14.411 Total 521120/s 2035 MiB/s 0 0' 00:08:14.411 05:05:32 -- accel/accel.sh@20 -- # IFS=: 00:08:14.411 05:05:32 -- accel/accel.sh@20 -- # read -r var val 00:08:14.411 05:05:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:14.411 05:05:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:14.411 05:05:32 -- accel/accel.sh@12 -- # build_accel_config 00:08:14.411 05:05:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:14.411 05:05:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.411 05:05:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.411 05:05:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:14.411 05:05:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:14.411 05:05:32 -- accel/accel.sh@41 -- # local IFS=, 00:08:14.411 05:05:32 -- accel/accel.sh@42 -- # jq -r . 00:08:14.411 [2024-07-26 05:05:33.024084] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:14.411 [2024-07-26 05:05:33.024246] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59859 ] 00:08:14.411 [2024-07-26 05:05:33.201651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.411 [2024-07-26 05:05:33.431622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val= 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val= 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val=0x1 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val= 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val= 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val=compare 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@24 -- # accel_opc=compare 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val= 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val=software 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@23 -- # accel_module=software 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val=32 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val=32 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val=1 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val='1 seconds' 
00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val=Yes 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val= 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:14.669 05:05:33 -- accel/accel.sh@21 -- # val= 00:08:14.669 05:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # IFS=: 00:08:14.669 05:05:33 -- accel/accel.sh@20 -- # read -r var val 00:08:16.572 05:05:35 -- accel/accel.sh@21 -- # val= 00:08:16.572 05:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.572 05:05:35 -- accel/accel.sh@20 -- # IFS=: 00:08:16.572 05:05:35 -- accel/accel.sh@20 -- # read -r var val 00:08:16.572 05:05:35 -- accel/accel.sh@21 -- # val= 00:08:16.572 05:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.572 05:05:35 -- accel/accel.sh@20 -- # IFS=: 00:08:16.572 05:05:35 -- accel/accel.sh@20 -- # read -r var val 00:08:16.572 05:05:35 -- accel/accel.sh@21 -- # val= 00:08:16.572 05:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.572 05:05:35 -- accel/accel.sh@20 -- # IFS=: 00:08:16.572 05:05:35 -- accel/accel.sh@20 -- # read -r var val 00:08:16.572 05:05:35 -- accel/accel.sh@21 -- # val= 00:08:16.572 05:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.572 05:05:35 -- accel/accel.sh@20 -- # IFS=: 00:08:16.572 05:05:35 -- accel/accel.sh@20 -- # read -r var val 00:08:16.572 05:05:35 -- accel/accel.sh@21 -- # val= 00:08:16.572 05:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.572 05:05:35 -- accel/accel.sh@20 -- # IFS=: 00:08:16.572 05:05:35 -- accel/accel.sh@20 -- # read -r var val 00:08:16.572 05:05:35 -- accel/accel.sh@21 -- # val= 00:08:16.572 05:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.572 05:05:35 -- accel/accel.sh@20 -- # IFS=: 00:08:16.572 05:05:35 -- accel/accel.sh@20 -- # read -r var val 00:08:16.572 ************************************ 00:08:16.572 END TEST accel_compare 00:08:16.572 ************************************ 00:08:16.572 05:05:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:16.572 05:05:35 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:08:16.572 05:05:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.572 00:08:16.572 real 0m5.385s 00:08:16.572 user 0m4.791s 00:08:16.572 sys 0m0.387s 00:08:16.572 05:05:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.572 05:05:35 -- common/autotest_common.sh@10 -- # set +x 00:08:16.831 05:05:35 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:16.831 05:05:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:16.831 05:05:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.831 05:05:35 -- common/autotest_common.sh@10 -- # set +x 00:08:16.831 ************************************ 00:08:16.831 START TEST accel_xor 00:08:16.831 ************************************ 00:08:16.831 05:05:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:08:16.831 05:05:35 -- accel/accel.sh@16 -- # local accel_opc 00:08:16.831 05:05:35 -- accel/accel.sh@17 -- # local accel_module 00:08:16.831 
05:05:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:08:16.831 05:05:35 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.831 05:05:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:16.831 05:05:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:16.831 05:05:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.831 05:05:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.831 05:05:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:16.831 05:05:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:16.831 05:05:35 -- accel/accel.sh@41 -- # local IFS=, 00:08:16.831 05:05:35 -- accel/accel.sh@42 -- # jq -r . 00:08:16.831 [2024-07-26 05:05:35.785131] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:16.831 [2024-07-26 05:05:35.785488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59906 ] 00:08:17.090 [2024-07-26 05:05:35.965033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.090 [2024-07-26 05:05:36.191486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.784 05:05:38 -- accel/accel.sh@18 -- # out=' 00:08:19.784 SPDK Configuration: 00:08:19.784 Core mask: 0x1 00:08:19.784 00:08:19.784 Accel Perf Configuration: 00:08:19.784 Workload Type: xor 00:08:19.784 Source buffers: 2 00:08:19.784 Transfer size: 4096 bytes 00:08:19.784 Vector count 1 00:08:19.784 Module: software 00:08:19.784 Queue depth: 32 00:08:19.784 Allocate depth: 32 00:08:19.784 # threads/core: 1 00:08:19.784 Run time: 1 seconds 00:08:19.784 Verify: Yes 00:08:19.784 00:08:19.784 Running for 1 seconds... 00:08:19.784 00:08:19.784 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:19.784 ------------------------------------------------------------------------------------ 00:08:19.784 0,0 373600/s 1459 MiB/s 0 0 00:08:19.784 ==================================================================================== 00:08:19.784 Total 373600/s 1459 MiB/s 0 0' 00:08:19.784 05:05:38 -- accel/accel.sh@20 -- # IFS=: 00:08:19.784 05:05:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:19.784 05:05:38 -- accel/accel.sh@20 -- # read -r var val 00:08:19.784 05:05:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:19.784 05:05:38 -- accel/accel.sh@12 -- # build_accel_config 00:08:19.784 05:05:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:19.784 05:05:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.784 05:05:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.784 05:05:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:19.784 05:05:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:19.784 05:05:38 -- accel/accel.sh@41 -- # local IFS=, 00:08:19.784 05:05:38 -- accel/accel.sh@42 -- # jq -r . 00:08:19.784 [2024-07-26 05:05:38.480477] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:19.784 [2024-07-26 05:05:38.480625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59943 ] 00:08:19.784 [2024-07-26 05:05:38.665757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.063 [2024-07-26 05:05:38.944537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val= 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val= 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val=0x1 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val= 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val= 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val=xor 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val=2 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val= 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val=software 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@23 -- # accel_module=software 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val=32 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val=32 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val=1 00:08:20.379 05:05:39 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val=Yes 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val= 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:20.379 05:05:39 -- accel/accel.sh@21 -- # val= 00:08:20.379 05:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # IFS=: 00:08:20.379 05:05:39 -- accel/accel.sh@20 -- # read -r var val 00:08:22.305 05:05:41 -- accel/accel.sh@21 -- # val= 00:08:22.305 05:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.305 05:05:41 -- accel/accel.sh@20 -- # IFS=: 00:08:22.305 05:05:41 -- accel/accel.sh@20 -- # read -r var val 00:08:22.305 05:05:41 -- accel/accel.sh@21 -- # val= 00:08:22.305 05:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.305 05:05:41 -- accel/accel.sh@20 -- # IFS=: 00:08:22.305 05:05:41 -- accel/accel.sh@20 -- # read -r var val 00:08:22.305 05:05:41 -- accel/accel.sh@21 -- # val= 00:08:22.305 05:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.305 05:05:41 -- accel/accel.sh@20 -- # IFS=: 00:08:22.305 05:05:41 -- accel/accel.sh@20 -- # read -r var val 00:08:22.305 05:05:41 -- accel/accel.sh@21 -- # val= 00:08:22.305 05:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.305 05:05:41 -- accel/accel.sh@20 -- # IFS=: 00:08:22.305 05:05:41 -- accel/accel.sh@20 -- # read -r var val 00:08:22.305 05:05:41 -- accel/accel.sh@21 -- # val= 00:08:22.305 05:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.305 05:05:41 -- accel/accel.sh@20 -- # IFS=: 00:08:22.305 05:05:41 -- accel/accel.sh@20 -- # read -r var val 00:08:22.305 05:05:41 -- accel/accel.sh@21 -- # val= 00:08:22.305 05:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.305 05:05:41 -- accel/accel.sh@20 -- # IFS=: 00:08:22.305 05:05:41 -- accel/accel.sh@20 -- # read -r var val 00:08:22.305 05:05:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:22.305 05:05:41 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:22.305 05:05:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.305 00:08:22.305 real 0m5.459s 00:08:22.305 user 0m4.871s 00:08:22.305 sys 0m0.379s 00:08:22.305 05:05:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.305 05:05:41 -- common/autotest_common.sh@10 -- # set +x 00:08:22.305 ************************************ 00:08:22.305 END TEST accel_xor 00:08:22.305 ************************************ 00:08:22.305 05:05:41 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:22.305 05:05:41 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:22.305 05:05:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.305 05:05:41 -- common/autotest_common.sh@10 -- # set +x 00:08:22.305 ************************************ 00:08:22.305 START TEST accel_xor 00:08:22.305 ************************************ 00:08:22.305 
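The two-source-buffer xor case above and the three-buffer variant that follows are driven by the same accel_perf binary; only the -x source-buffer count differs. A minimal reproduction sketch, assuming an SPDK tree built at the path this log uses (the harness also passes a JSON config over -c /dev/fd/62, omitted here since the config array is empty in these runs):

    PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    "$PERF" -t 1 -w xor -y        # two source buffers, as in the run above
    "$PERF" -t 1 -w xor -y -x 3   # three source buffers, as in the run below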
05:05:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:08:22.305 05:05:41 -- accel/accel.sh@16 -- # local accel_opc 00:08:22.305 05:05:41 -- accel/accel.sh@17 -- # local accel_module 00:08:22.305 05:05:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:08:22.305 05:05:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:22.305 05:05:41 -- accel/accel.sh@12 -- # build_accel_config 00:08:22.305 05:05:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:22.305 05:05:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.305 05:05:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.305 05:05:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:22.305 05:05:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:22.305 05:05:41 -- accel/accel.sh@41 -- # local IFS=, 00:08:22.305 05:05:41 -- accel/accel.sh@42 -- # jq -r . 00:08:22.305 [2024-07-26 05:05:41.304417] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:22.305 [2024-07-26 05:05:41.304574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59985 ] 00:08:22.564 [2024-07-26 05:05:41.488633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.823 [2024-07-26 05:05:41.720286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.358 05:05:43 -- accel/accel.sh@18 -- # out=' 00:08:25.358 SPDK Configuration: 00:08:25.358 Core mask: 0x1 00:08:25.358 00:08:25.358 Accel Perf Configuration: 00:08:25.358 Workload Type: xor 00:08:25.358 Source buffers: 3 00:08:25.358 Transfer size: 4096 bytes 00:08:25.358 Vector count 1 00:08:25.358 Module: software 00:08:25.358 Queue depth: 32 00:08:25.358 Allocate depth: 32 00:08:25.358 # threads/core: 1 00:08:25.358 Run time: 1 seconds 00:08:25.358 Verify: Yes 00:08:25.358 00:08:25.358 Running for 1 seconds... 00:08:25.358 00:08:25.358 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:25.358 ------------------------------------------------------------------------------------ 00:08:25.358 0,0 350848/s 1370 MiB/s 0 0 00:08:25.358 ==================================================================================== 00:08:25.358 Total 350848/s 1370 MiB/s 0 0' 00:08:25.358 05:05:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:25.358 05:05:43 -- accel/accel.sh@20 -- # IFS=: 00:08:25.358 05:05:43 -- accel/accel.sh@20 -- # read -r var val 00:08:25.358 05:05:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:25.358 05:05:43 -- accel/accel.sh@12 -- # build_accel_config 00:08:25.358 05:05:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:25.358 05:05:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.358 05:05:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.358 05:05:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:25.358 05:05:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:25.358 05:05:43 -- accel/accel.sh@41 -- # local IFS=, 00:08:25.358 05:05:43 -- accel/accel.sh@42 -- # jq -r . 00:08:25.358 [2024-07-26 05:05:44.009055] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:25.358 [2024-07-26 05:05:44.009220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60021 ] 00:08:25.358 [2024-07-26 05:05:44.187993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.358 [2024-07-26 05:05:44.414657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.617 05:05:44 -- accel/accel.sh@21 -- # val= 00:08:25.617 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.617 05:05:44 -- accel/accel.sh@21 -- # val= 00:08:25.617 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.617 05:05:44 -- accel/accel.sh@21 -- # val=0x1 00:08:25.617 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.617 05:05:44 -- accel/accel.sh@21 -- # val= 00:08:25.617 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.617 05:05:44 -- accel/accel.sh@21 -- # val= 00:08:25.617 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.617 05:05:44 -- accel/accel.sh@21 -- # val=xor 00:08:25.617 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.617 05:05:44 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.617 05:05:44 -- accel/accel.sh@21 -- # val=3 00:08:25.617 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.617 05:05:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:25.617 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.617 05:05:44 -- accel/accel.sh@21 -- # val= 00:08:25.617 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.617 05:05:44 -- accel/accel.sh@21 -- # val=software 00:08:25.617 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.617 05:05:44 -- accel/accel.sh@23 -- # accel_module=software 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.617 05:05:44 -- accel/accel.sh@21 -- # val=32 00:08:25.617 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.617 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.617 05:05:44 -- accel/accel.sh@21 -- # val=32 00:08:25.617 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.618 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.618 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.618 05:05:44 -- accel/accel.sh@21 -- # val=1 00:08:25.618 05:05:44 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:25.618 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.618 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.618 05:05:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:25.618 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.618 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.618 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.618 05:05:44 -- accel/accel.sh@21 -- # val=Yes 00:08:25.618 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.618 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.618 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.618 05:05:44 -- accel/accel.sh@21 -- # val= 00:08:25.618 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.618 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.618 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:25.618 05:05:44 -- accel/accel.sh@21 -- # val= 00:08:25.618 05:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.618 05:05:44 -- accel/accel.sh@20 -- # IFS=: 00:08:25.618 05:05:44 -- accel/accel.sh@20 -- # read -r var val 00:08:27.522 05:05:46 -- accel/accel.sh@21 -- # val= 00:08:27.522 05:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.522 05:05:46 -- accel/accel.sh@20 -- # IFS=: 00:08:27.522 05:05:46 -- accel/accel.sh@20 -- # read -r var val 00:08:27.522 05:05:46 -- accel/accel.sh@21 -- # val= 00:08:27.522 05:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.522 05:05:46 -- accel/accel.sh@20 -- # IFS=: 00:08:27.522 05:05:46 -- accel/accel.sh@20 -- # read -r var val 00:08:27.522 05:05:46 -- accel/accel.sh@21 -- # val= 00:08:27.522 05:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.522 05:05:46 -- accel/accel.sh@20 -- # IFS=: 00:08:27.522 05:05:46 -- accel/accel.sh@20 -- # read -r var val 00:08:27.522 05:05:46 -- accel/accel.sh@21 -- # val= 00:08:27.522 05:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.522 05:05:46 -- accel/accel.sh@20 -- # IFS=: 00:08:27.522 05:05:46 -- accel/accel.sh@20 -- # read -r var val 00:08:27.522 05:05:46 -- accel/accel.sh@21 -- # val= 00:08:27.522 05:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.522 05:05:46 -- accel/accel.sh@20 -- # IFS=: 00:08:27.522 05:05:46 -- accel/accel.sh@20 -- # read -r var val 00:08:27.522 05:05:46 -- accel/accel.sh@21 -- # val= 00:08:27.522 05:05:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.522 05:05:46 -- accel/accel.sh@20 -- # IFS=: 00:08:27.522 05:05:46 -- accel/accel.sh@20 -- # read -r var val 00:08:27.522 05:05:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:27.522 05:05:46 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:27.781 05:05:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.781 00:08:27.781 real 0m5.389s 00:08:27.781 user 0m4.795s 00:08:27.781 sys 0m0.384s 00:08:27.781 05:05:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.781 05:05:46 -- common/autotest_common.sh@10 -- # set +x 00:08:27.781 ************************************ 00:08:27.781 END TEST accel_xor 00:08:27.781 ************************************ 00:08:27.781 05:05:46 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:27.781 05:05:46 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:27.781 05:05:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:27.781 05:05:46 -- common/autotest_common.sh@10 -- # set +x 00:08:27.781 ************************************ 00:08:27.781 START TEST accel_dif_verify 00:08:27.781 ************************************ 
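Per the configuration block printed further down, dif_verify moves 4096-byte buffers carved into 512-byte blocks with 8 bytes of DIF metadata per block (the '4096 bytes', '512 bytes', and '8 bytes' values threaded through the val assignments below). A sketch of the layout arithmetic, assuming those printed sizes are the effective ones:

    # 4096-byte transfer / 512-byte block size -> 8 protected blocks,
    # each carrying an 8-byte DIF tuple, i.e. 64 metadata bytes per transfer
    awk 'BEGIN { b = 4096 / 512; print b " blocks, " b * 8 " DIF bytes per transfer" }'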
00:08:27.781 05:05:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:08:27.781 05:05:46 -- accel/accel.sh@16 -- # local accel_opc 00:08:27.781 05:05:46 -- accel/accel.sh@17 -- # local accel_module 00:08:27.781 05:05:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:08:27.781 05:05:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:27.781 05:05:46 -- accel/accel.sh@12 -- # build_accel_config 00:08:27.781 05:05:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:27.781 05:05:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.781 05:05:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.781 05:05:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:27.781 05:05:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:27.782 05:05:46 -- accel/accel.sh@41 -- # local IFS=, 00:08:27.782 05:05:46 -- accel/accel.sh@42 -- # jq -r . 00:08:27.782 [2024-07-26 05:05:46.754128] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:27.782 [2024-07-26 05:05:46.754295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60073 ] 00:08:28.041 [2024-07-26 05:05:46.940478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.299 [2024-07-26 05:05:47.170448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.834 05:05:49 -- accel/accel.sh@18 -- # out=' 00:08:30.834 SPDK Configuration: 00:08:30.834 Core mask: 0x1 00:08:30.834 00:08:30.834 Accel Perf Configuration: 00:08:30.834 Workload Type: dif_verify 00:08:30.834 Vector size: 4096 bytes 00:08:30.834 Transfer size: 4096 bytes 00:08:30.834 Block size: 512 bytes 00:08:30.834 Metadata size: 8 bytes 00:08:30.834 Vector count 1 00:08:30.834 Module: software 00:08:30.834 Queue depth: 32 00:08:30.834 Allocate depth: 32 00:08:30.834 # threads/core: 1 00:08:30.834 Run time: 1 seconds 00:08:30.834 Verify: No 00:08:30.834 00:08:30.834 Running for 1 seconds... 00:08:30.834 00:08:30.834 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:30.834 ------------------------------------------------------------------------------------ 00:08:30.834 0,0 118016/s 468 MiB/s 0 0 00:08:30.834 ==================================================================================== 00:08:30.835 Total 118016/s 461 MiB/s 0 0' 00:08:30.835 05:05:49 -- accel/accel.sh@20 -- # IFS=: 00:08:30.835 05:05:49 -- accel/accel.sh@20 -- # read -r var val 00:08:30.835 05:05:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:30.835 05:05:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:30.835 05:05:49 -- accel/accel.sh@12 -- # build_accel_config 00:08:30.835 05:05:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:30.835 05:05:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.835 05:05:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.835 05:05:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:30.835 05:05:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:30.835 05:05:49 -- accel/accel.sh@41 -- # local IFS=, 00:08:30.835 05:05:49 -- accel/accel.sh@42 -- # jq -r . 00:08:30.835 [2024-07-26 05:05:49.468519] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:30.835 [2024-07-26 05:05:49.468671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60099 ] 00:08:30.835 [2024-07-26 05:05:49.652396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.835 [2024-07-26 05:05:49.887089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val= 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val= 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val=0x1 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val= 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val= 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val=dif_verify 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val= 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val=software 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@23 -- # accel_module=software 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 
-- # val=32 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val=32 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val=1 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val=No 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val= 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:31.094 05:05:50 -- accel/accel.sh@21 -- # val= 00:08:31.094 05:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # IFS=: 00:08:31.094 05:05:50 -- accel/accel.sh@20 -- # read -r var val 00:08:33.649 05:05:52 -- accel/accel.sh@21 -- # val= 00:08:33.650 05:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.650 05:05:52 -- accel/accel.sh@20 -- # IFS=: 00:08:33.650 05:05:52 -- accel/accel.sh@20 -- # read -r var val 00:08:33.650 05:05:52 -- accel/accel.sh@21 -- # val= 00:08:33.650 05:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.650 05:05:52 -- accel/accel.sh@20 -- # IFS=: 00:08:33.650 05:05:52 -- accel/accel.sh@20 -- # read -r var val 00:08:33.650 05:05:52 -- accel/accel.sh@21 -- # val= 00:08:33.650 05:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.650 05:05:52 -- accel/accel.sh@20 -- # IFS=: 00:08:33.650 05:05:52 -- accel/accel.sh@20 -- # read -r var val 00:08:33.650 05:05:52 -- accel/accel.sh@21 -- # val= 00:08:33.650 05:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.650 05:05:52 -- accel/accel.sh@20 -- # IFS=: 00:08:33.650 05:05:52 -- accel/accel.sh@20 -- # read -r var val 00:08:33.650 05:05:52 -- accel/accel.sh@21 -- # val= 00:08:33.650 05:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.650 05:05:52 -- accel/accel.sh@20 -- # IFS=: 00:08:33.650 05:05:52 -- accel/accel.sh@20 -- # read -r var val 00:08:33.650 05:05:52 -- accel/accel.sh@21 -- # val= 00:08:33.650 05:05:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:33.650 05:05:52 -- accel/accel.sh@20 -- # IFS=: 00:08:33.650 05:05:52 -- accel/accel.sh@20 -- # read -r var val 00:08:33.650 05:05:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:33.650 05:05:52 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:08:33.650 05:05:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.650 00:08:33.650 real 0m5.608s 00:08:33.650 user 0m4.996s 00:08:33.650 sys 0m0.404s 00:08:33.650 05:05:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.650 ************************************ 00:08:33.650 END TEST accel_dif_verify 00:08:33.650 ************************************ 00:08:33.650 
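dif_verify closes at 118016 transfers/s; the Total row's 461 MiB/s again matches transfers/s times the 4096-byte transfer size (118016 x 4096 / 2^20 = 461.0 MiB/s). The generate-side DIF cases exercised next use the same binary, so a reproduction sketch under the same built-tree assumption:

    PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
    "$PERF" -t 1 -w dif_generate        # as in the accel_dif_generate test below
    "$PERF" -t 1 -w dif_generate_copy   # as in the accel_dif_generate_copy test after it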
05:05:52 -- common/autotest_common.sh@10 -- # set +x 00:08:33.650 05:05:52 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:33.650 05:05:52 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:33.650 05:05:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.650 05:05:52 -- common/autotest_common.sh@10 -- # set +x 00:08:33.650 ************************************ 00:08:33.650 START TEST accel_dif_generate 00:08:33.650 ************************************ 00:08:33.650 05:05:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:08:33.650 05:05:52 -- accel/accel.sh@16 -- # local accel_opc 00:08:33.650 05:05:52 -- accel/accel.sh@17 -- # local accel_module 00:08:33.650 05:05:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:08:33.650 05:05:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:33.650 05:05:52 -- accel/accel.sh@12 -- # build_accel_config 00:08:33.650 05:05:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:33.650 05:05:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.650 05:05:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.650 05:05:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:33.650 05:05:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:33.650 05:05:52 -- accel/accel.sh@41 -- # local IFS=, 00:08:33.650 05:05:52 -- accel/accel.sh@42 -- # jq -r . 00:08:33.650 [2024-07-26 05:05:52.426091] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:33.650 [2024-07-26 05:05:52.426272] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60151 ] 00:08:33.650 [2024-07-26 05:05:52.610849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.907 [2024-07-26 05:05:52.892880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.439 05:05:55 -- accel/accel.sh@18 -- # out=' 00:08:36.439 SPDK Configuration: 00:08:36.439 Core mask: 0x1 00:08:36.439 00:08:36.439 Accel Perf Configuration: 00:08:36.439 Workload Type: dif_generate 00:08:36.439 Vector size: 4096 bytes 00:08:36.439 Transfer size: 4096 bytes 00:08:36.439 Block size: 512 bytes 00:08:36.439 Metadata size: 8 bytes 00:08:36.439 Vector count 1 00:08:36.439 Module: software 00:08:36.439 Queue depth: 32 00:08:36.439 Allocate depth: 32 00:08:36.439 # threads/core: 1 00:08:36.439 Run time: 1 seconds 00:08:36.439 Verify: No 00:08:36.439 00:08:36.439 Running for 1 seconds... 
00:08:36.439 00:08:36.439 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:36.439 ------------------------------------------------------------------------------------ 00:08:36.439 0,0 139040/s 551 MiB/s 0 0 00:08:36.439 ==================================================================================== 00:08:36.439 Total 139040/s 543 MiB/s 0 0' 00:08:36.439 05:05:55 -- accel/accel.sh@20 -- # IFS=: 00:08:36.439 05:05:55 -- accel/accel.sh@20 -- # read -r var val 00:08:36.439 05:05:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:36.439 05:05:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:36.439 05:05:55 -- accel/accel.sh@12 -- # build_accel_config 00:08:36.439 05:05:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:36.439 05:05:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.439 05:05:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.439 05:05:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:36.439 05:05:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:36.439 05:05:55 -- accel/accel.sh@41 -- # local IFS=, 00:08:36.439 05:05:55 -- accel/accel.sh@42 -- # jq -r . 00:08:36.439 [2024-07-26 05:05:55.380869] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:36.439 [2024-07-26 05:05:55.381055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60188 ] 00:08:36.697 [2024-07-26 05:05:55.565388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.954 [2024-07-26 05:05:55.852303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.211 05:05:56 -- accel/accel.sh@21 -- # val= 00:08:37.211 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.211 05:05:56 -- accel/accel.sh@21 -- # val= 00:08:37.211 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.211 05:05:56 -- accel/accel.sh@21 -- # val=0x1 00:08:37.211 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.211 05:05:56 -- accel/accel.sh@21 -- # val= 00:08:37.211 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.211 05:05:56 -- accel/accel.sh@21 -- # val= 00:08:37.211 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.211 05:05:56 -- accel/accel.sh@21 -- # val=dif_generate 00:08:37.211 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.211 05:05:56 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.211 05:05:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:37.211 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # read -r var val 
00:08:37.211 05:05:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:37.211 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.211 05:05:56 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:37.211 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.211 05:05:56 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:37.211 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.211 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.212 05:05:56 -- accel/accel.sh@21 -- # val= 00:08:37.212 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.212 05:05:56 -- accel/accel.sh@21 -- # val=software 00:08:37.212 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.212 05:05:56 -- accel/accel.sh@23 -- # accel_module=software 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.212 05:05:56 -- accel/accel.sh@21 -- # val=32 00:08:37.212 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.212 05:05:56 -- accel/accel.sh@21 -- # val=32 00:08:37.212 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.212 05:05:56 -- accel/accel.sh@21 -- # val=1 00:08:37.212 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.212 05:05:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:37.212 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.212 05:05:56 -- accel/accel.sh@21 -- # val=No 00:08:37.212 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.212 05:05:56 -- accel/accel.sh@21 -- # val= 00:08:37.212 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:37.212 05:05:56 -- accel/accel.sh@21 -- # val= 00:08:37.212 05:05:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # IFS=: 00:08:37.212 05:05:56 -- accel/accel.sh@20 -- # read -r var val 00:08:39.737 05:05:58 -- accel/accel.sh@21 -- # val= 00:08:39.737 05:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.737 05:05:58 -- accel/accel.sh@20 -- # IFS=: 00:08:39.737 05:05:58 -- accel/accel.sh@20 -- # read -r var val 00:08:39.737 05:05:58 -- accel/accel.sh@21 -- # val= 00:08:39.737 05:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.737 05:05:58 -- accel/accel.sh@20 -- # IFS=: 00:08:39.737 05:05:58 -- accel/accel.sh@20 -- # read -r var val 00:08:39.737 05:05:58 -- accel/accel.sh@21 -- # val= 00:08:39.737 05:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.737 05:05:58 -- 
accel/accel.sh@20 -- # IFS=: 00:08:39.737 05:05:58 -- accel/accel.sh@20 -- # read -r var val 00:08:39.737 05:05:58 -- accel/accel.sh@21 -- # val= 00:08:39.737 05:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.737 05:05:58 -- accel/accel.sh@20 -- # IFS=: 00:08:39.737 05:05:58 -- accel/accel.sh@20 -- # read -r var val 00:08:39.737 05:05:58 -- accel/accel.sh@21 -- # val= 00:08:39.737 05:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.737 05:05:58 -- accel/accel.sh@20 -- # IFS=: 00:08:39.737 05:05:58 -- accel/accel.sh@20 -- # read -r var val 00:08:39.737 05:05:58 -- accel/accel.sh@21 -- # val= 00:08:39.737 05:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.737 05:05:58 -- accel/accel.sh@20 -- # IFS=: 00:08:39.737 05:05:58 -- accel/accel.sh@20 -- # read -r var val 00:08:39.737 05:05:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:39.737 05:05:58 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:08:39.737 05:05:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.737 00:08:39.737 real 0m6.019s 00:08:39.737 user 0m5.273s 00:08:39.737 sys 0m0.533s 00:08:39.737 05:05:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.737 05:05:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.737 ************************************ 00:08:39.737 END TEST accel_dif_generate 00:08:39.737 ************************************ 00:08:39.737 05:05:58 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:39.737 05:05:58 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:39.737 05:05:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.737 05:05:58 -- common/autotest_common.sh@10 -- # set +x 00:08:39.737 ************************************ 00:08:39.737 START TEST accel_dif_generate_copy 00:08:39.737 ************************************ 00:08:39.737 05:05:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:08:39.737 05:05:58 -- accel/accel.sh@16 -- # local accel_opc 00:08:39.737 05:05:58 -- accel/accel.sh@17 -- # local accel_module 00:08:39.737 05:05:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:08:39.737 05:05:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:39.737 05:05:58 -- accel/accel.sh@12 -- # build_accel_config 00:08:39.737 05:05:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:39.737 05:05:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.737 05:05:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.737 05:05:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:39.737 05:05:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:39.737 05:05:58 -- accel/accel.sh@41 -- # local IFS=, 00:08:39.737 05:05:58 -- accel/accel.sh@42 -- # jq -r . 00:08:39.737 [2024-07-26 05:05:58.506981] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:39.737 [2024-07-26 05:05:58.507133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60239 ] 00:08:39.737 [2024-07-26 05:05:58.696468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.996 [2024-07-26 05:05:59.040068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.526 05:06:01 -- accel/accel.sh@18 -- # out=' 00:08:42.527 SPDK Configuration: 00:08:42.527 Core mask: 0x1 00:08:42.527 00:08:42.527 Accel Perf Configuration: 00:08:42.527 Workload Type: dif_generate_copy 00:08:42.527 Vector size: 4096 bytes 00:08:42.527 Transfer size: 4096 bytes 00:08:42.527 Vector count 1 00:08:42.527 Module: software 00:08:42.527 Queue depth: 32 00:08:42.527 Allocate depth: 32 00:08:42.527 # threads/core: 1 00:08:42.527 Run time: 1 seconds 00:08:42.527 Verify: No 00:08:42.527 00:08:42.527 Running for 1 seconds... 00:08:42.527 00:08:42.527 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:42.527 ------------------------------------------------------------------------------------ 00:08:42.527 0,0 93760/s 371 MiB/s 0 0 00:08:42.527 ==================================================================================== 00:08:42.527 Total 93760/s 366 MiB/s 0 0' 00:08:42.527 05:06:01 -- accel/accel.sh@20 -- # IFS=: 00:08:42.527 05:06:01 -- accel/accel.sh@20 -- # read -r var val 00:08:42.527 05:06:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:42.527 05:06:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:42.527 05:06:01 -- accel/accel.sh@12 -- # build_accel_config 00:08:42.527 05:06:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:42.527 05:06:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:42.527 05:06:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.527 05:06:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:42.527 05:06:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:42.527 05:06:01 -- accel/accel.sh@41 -- # local IFS=, 00:08:42.527 05:06:01 -- accel/accel.sh@42 -- # jq -r . 00:08:42.527 [2024-07-26 05:06:01.490834] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:42.527 [2024-07-26 05:06:01.490984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60272 ] 00:08:42.785 [2024-07-26 05:06:01.673362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.049 [2024-07-26 05:06:01.899903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.049 05:06:02 -- accel/accel.sh@21 -- # val= 00:08:43.049 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.049 05:06:02 -- accel/accel.sh@21 -- # val= 00:08:43.049 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.049 05:06:02 -- accel/accel.sh@21 -- # val=0x1 00:08:43.049 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.049 05:06:02 -- accel/accel.sh@21 -- # val= 00:08:43.049 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.049 05:06:02 -- accel/accel.sh@21 -- # val= 00:08:43.049 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.049 05:06:02 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:08:43.049 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.049 05:06:02 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.049 05:06:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:43.049 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.049 05:06:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:43.049 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.049 05:06:02 -- accel/accel.sh@21 -- # val= 00:08:43.049 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.049 05:06:02 -- accel/accel.sh@21 -- # val=software 00:08:43.049 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.049 05:06:02 -- accel/accel.sh@23 -- # accel_module=software 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.049 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.049 05:06:02 -- accel/accel.sh@21 -- # val=32 00:08:43.318 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.318 05:06:02 -- accel/accel.sh@21 -- # val=32 00:08:43.318 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.318 05:06:02 -- accel/accel.sh@21 
-- # val=1 00:08:43.318 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.318 05:06:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:43.318 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.318 05:06:02 -- accel/accel.sh@21 -- # val=No 00:08:43.318 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.318 05:06:02 -- accel/accel.sh@21 -- # val= 00:08:43.318 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:43.318 05:06:02 -- accel/accel.sh@21 -- # val= 00:08:43.318 05:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # IFS=: 00:08:43.318 05:06:02 -- accel/accel.sh@20 -- # read -r var val 00:08:45.220 05:06:04 -- accel/accel.sh@21 -- # val= 00:08:45.220 05:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.220 05:06:04 -- accel/accel.sh@20 -- # IFS=: 00:08:45.220 05:06:04 -- accel/accel.sh@20 -- # read -r var val 00:08:45.220 05:06:04 -- accel/accel.sh@21 -- # val= 00:08:45.220 05:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.220 05:06:04 -- accel/accel.sh@20 -- # IFS=: 00:08:45.220 05:06:04 -- accel/accel.sh@20 -- # read -r var val 00:08:45.220 05:06:04 -- accel/accel.sh@21 -- # val= 00:08:45.220 05:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.220 05:06:04 -- accel/accel.sh@20 -- # IFS=: 00:08:45.220 05:06:04 -- accel/accel.sh@20 -- # read -r var val 00:08:45.220 05:06:04 -- accel/accel.sh@21 -- # val= 00:08:45.220 05:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.220 05:06:04 -- accel/accel.sh@20 -- # IFS=: 00:08:45.220 05:06:04 -- accel/accel.sh@20 -- # read -r var val 00:08:45.220 05:06:04 -- accel/accel.sh@21 -- # val= 00:08:45.220 05:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.220 05:06:04 -- accel/accel.sh@20 -- # IFS=: 00:08:45.220 05:06:04 -- accel/accel.sh@20 -- # read -r var val 00:08:45.220 05:06:04 -- accel/accel.sh@21 -- # val= 00:08:45.220 05:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:08:45.220 05:06:04 -- accel/accel.sh@20 -- # IFS=: 00:08:45.220 05:06:04 -- accel/accel.sh@20 -- # read -r var val 00:08:45.220 05:06:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:45.220 05:06:04 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:08:45.220 05:06:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:45.220 00:08:45.220 real 0m5.708s 00:08:45.220 user 0m5.093s 00:08:45.220 sys 0m0.403s 00:08:45.220 ************************************ 00:08:45.220 END TEST accel_dif_generate_copy 00:08:45.220 ************************************ 00:08:45.220 05:06:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.220 05:06:04 -- common/autotest_common.sh@10 -- # set +x 00:08:45.220 05:06:04 -- accel/accel.sh@107 -- # [[ y == y ]] 00:08:45.220 05:06:04 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:45.220 05:06:04 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:08:45.220 05:06:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.220 05:06:04 -- 
common/autotest_common.sh@10 -- # set +x 00:08:45.220 ************************************ 00:08:45.220 START TEST accel_comp 00:08:45.220 ************************************ 00:08:45.220 05:06:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:45.220 05:06:04 -- accel/accel.sh@16 -- # local accel_opc 00:08:45.220 05:06:04 -- accel/accel.sh@17 -- # local accel_module 00:08:45.220 05:06:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:45.220 05:06:04 -- accel/accel.sh@12 -- # build_accel_config 00:08:45.220 05:06:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:45.220 05:06:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:45.220 05:06:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:45.220 05:06:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:45.220 05:06:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:45.220 05:06:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:45.221 05:06:04 -- accel/accel.sh@41 -- # local IFS=, 00:08:45.221 05:06:04 -- accel/accel.sh@42 -- # jq -r . 00:08:45.221 [2024-07-26 05:06:04.269698] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:45.221 [2024-07-26 05:06:04.269845] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60324 ] 00:08:45.479 [2024-07-26 05:06:04.452620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.737 [2024-07-26 05:06:04.681393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.268 05:06:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:48.268 00:08:48.268 SPDK Configuration: 00:08:48.268 Core mask: 0x1 00:08:48.268 00:08:48.268 Accel Perf Configuration: 00:08:48.268 Workload Type: compress 00:08:48.268 Transfer size: 4096 bytes 00:08:48.268 Vector count 1 00:08:48.268 Module: software 00:08:48.268 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:48.268 Queue depth: 32 00:08:48.268 Allocate depth: 32 00:08:48.268 # threads/core: 1 00:08:48.268 Run time: 1 seconds 00:08:48.268 Verify: No 00:08:48.268 00:08:48.268 Running for 1 seconds... 
00:08:48.268 00:08:48.268 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:48.268 ------------------------------------------------------------------------------------ 00:08:48.268 0,0 57696/s 225 MiB/s 0 0 00:08:48.268 ==================================================================================== 00:08:48.268 Total 57696/s 225 MiB/s 0 0' 00:08:48.268 05:06:06 -- accel/accel.sh@20 -- # IFS=: 00:08:48.268 05:06:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:48.268 05:06:06 -- accel/accel.sh@20 -- # read -r var val 00:08:48.268 05:06:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:48.268 05:06:06 -- accel/accel.sh@12 -- # build_accel_config 00:08:48.268 05:06:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:48.268 05:06:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:48.268 05:06:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:48.268 05:06:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:48.268 05:06:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:48.268 05:06:06 -- accel/accel.sh@41 -- # local IFS=, 00:08:48.268 05:06:06 -- accel/accel.sh@42 -- # jq -r . 00:08:48.268 [2024-07-26 05:06:06.992633] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:48.268 [2024-07-26 05:06:06.992782] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60356 ] 00:08:48.851 [2024-07-26 05:06:07.173633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.526 [2024-07-26 05:06:07.403491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.784 05:06:07 -- accel/accel.sh@21 -- # val= 00:08:48.784 05:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.784 05:06:07 -- accel/accel.sh@20 -- # IFS=: 
00:08:48.784 05:06:07 -- accel/accel.sh@20 -- # read -r var val 00:08:48.784 05:06:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:48.784 05:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.784 05:06:07 -- accel/accel.sh@20 -- # IFS=: 00:08:48.784 05:06:07 -- accel/accel.sh@20 -- # read -r var val 00:08:48.784 05:06:07 -- accel/accel.sh@21 -- # val= 00:08:48.784 05:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.784 05:06:07 -- accel/accel.sh@20 -- # IFS=: 00:08:48.784 05:06:07 -- accel/accel.sh@20 -- # read -r var val 00:08:48.784 05:06:07 -- accel/accel.sh@21 -- # val=software 00:08:48.784 05:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.784 05:06:07 -- accel/accel.sh@23 -- # accel_module=software 00:08:48.784 05:06:07 -- accel/accel.sh@20 -- # IFS=: 00:08:48.784 05:06:07 -- accel/accel.sh@20 -- # read -r var val 00:08:48.784 05:06:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:48.784 05:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # IFS=: 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # read -r var val 00:08:48.785 05:06:07 -- accel/accel.sh@21 -- # val=32 00:08:48.785 05:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # IFS=: 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # read -r var val 00:08:48.785 05:06:07 -- accel/accel.sh@21 -- # val=32 00:08:48.785 05:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # IFS=: 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # read -r var val 00:08:48.785 05:06:07 -- accel/accel.sh@21 -- # val=1 00:08:48.785 05:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # IFS=: 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # read -r var val 00:08:48.785 05:06:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:48.785 05:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # IFS=: 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # read -r var val 00:08:48.785 05:06:07 -- accel/accel.sh@21 -- # val=No 00:08:48.785 05:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # IFS=: 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # read -r var val 00:08:48.785 05:06:07 -- accel/accel.sh@21 -- # val= 00:08:48.785 05:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # IFS=: 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # read -r var val 00:08:48.785 05:06:07 -- accel/accel.sh@21 -- # val= 00:08:48.785 05:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # IFS=: 00:08:48.785 05:06:07 -- accel/accel.sh@20 -- # read -r var val 00:08:50.685 05:06:09 -- accel/accel.sh@21 -- # val= 00:08:50.685 05:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.685 05:06:09 -- accel/accel.sh@20 -- # IFS=: 00:08:50.685 05:06:09 -- accel/accel.sh@20 -- # read -r var val 00:08:50.685 05:06:09 -- accel/accel.sh@21 -- # val= 00:08:50.685 05:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.685 05:06:09 -- accel/accel.sh@20 -- # IFS=: 00:08:50.685 05:06:09 -- accel/accel.sh@20 -- # read -r var val 00:08:50.944 05:06:09 -- accel/accel.sh@21 -- # val= 00:08:50.944 05:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.944 05:06:09 -- accel/accel.sh@20 -- # IFS=: 00:08:50.944 05:06:09 -- accel/accel.sh@20 -- # read -r var val 00:08:50.944 05:06:09 -- accel/accel.sh@21 -- # val= 
00:08:50.944 05:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.944 05:06:09 -- accel/accel.sh@20 -- # IFS=: 00:08:50.944 05:06:09 -- accel/accel.sh@20 -- # read -r var val 00:08:50.944 05:06:09 -- accel/accel.sh@21 -- # val= 00:08:50.944 05:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.944 05:06:09 -- accel/accel.sh@20 -- # IFS=: 00:08:50.944 05:06:09 -- accel/accel.sh@20 -- # read -r var val 00:08:50.944 05:06:09 -- accel/accel.sh@21 -- # val= 00:08:50.944 05:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:08:50.944 05:06:09 -- accel/accel.sh@20 -- # IFS=: 00:08:50.944 05:06:09 -- accel/accel.sh@20 -- # read -r var val 00:08:50.944 05:06:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:50.944 05:06:09 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:08:50.944 05:06:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:50.944 00:08:50.944 real 0m5.600s 00:08:50.944 user 0m5.005s 00:08:50.944 sys 0m0.383s 00:08:50.944 05:06:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.944 ************************************ 00:08:50.944 END TEST accel_comp 00:08:50.944 ************************************ 00:08:50.944 05:06:09 -- common/autotest_common.sh@10 -- # set +x 00:08:50.944 05:06:09 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:50.944 05:06:09 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:50.944 05:06:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.944 05:06:09 -- common/autotest_common.sh@10 -- # set +x 00:08:50.944 ************************************ 00:08:50.944 START TEST accel_decomp 00:08:50.944 ************************************ 00:08:50.944 05:06:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:50.944 05:06:09 -- accel/accel.sh@16 -- # local accel_opc 00:08:50.944 05:06:09 -- accel/accel.sh@17 -- # local accel_module 00:08:50.944 05:06:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:50.944 05:06:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:50.944 05:06:09 -- accel/accel.sh@12 -- # build_accel_config 00:08:50.944 05:06:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:50.944 05:06:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:50.944 05:06:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:50.944 05:06:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:50.944 05:06:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:50.944 05:06:09 -- accel/accel.sh@41 -- # local IFS=, 00:08:50.944 05:06:09 -- accel/accel.sh@42 -- # jq -r . 00:08:50.944 [2024-07-26 05:06:09.931888] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:50.944 [2024-07-26 05:06:09.932234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60402 ] 00:08:51.203 [2024-07-26 05:06:10.115479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.460 [2024-07-26 05:06:10.352039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.991 05:06:12 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:53.991 00:08:53.991 SPDK Configuration: 00:08:53.991 Core mask: 0x1 00:08:53.991 00:08:53.991 Accel Perf Configuration: 00:08:53.991 Workload Type: decompress 00:08:53.991 Transfer size: 4096 bytes 00:08:53.991 Vector count 1 00:08:53.991 Module: software 00:08:53.991 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:53.991 Queue depth: 32 00:08:53.991 Allocate depth: 32 00:08:53.991 # threads/core: 1 00:08:53.991 Run time: 1 seconds 00:08:53.991 Verify: Yes 00:08:53.991 00:08:53.991 Running for 1 seconds... 00:08:53.991 00:08:53.991 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:53.991 ------------------------------------------------------------------------------------ 00:08:53.991 0,0 63360/s 247 MiB/s 0 0 00:08:53.991 ==================================================================================== 00:08:53.991 Total 63360/s 247 MiB/s 0 0' 00:08:53.991 05:06:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:53.991 05:06:12 -- accel/accel.sh@20 -- # IFS=: 00:08:53.991 05:06:12 -- accel/accel.sh@20 -- # read -r var val 00:08:53.991 05:06:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:53.991 05:06:12 -- accel/accel.sh@12 -- # build_accel_config 00:08:53.991 05:06:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:53.991 05:06:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:53.991 05:06:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:53.991 05:06:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:53.991 05:06:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:53.991 05:06:12 -- accel/accel.sh@41 -- # local IFS=, 00:08:53.991 05:06:12 -- accel/accel.sh@42 -- # jq -r . 00:08:53.991 [2024-07-26 05:06:12.673461] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
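The Bandwidth column in these accel_perf tables follows directly from the Transfers column and the configured transfer size, so each result can be sanity-checked by hand. A minimal sketch in shell, using the 63360 transfers/s and 4096-byte transfer size reported by the decompress run above, and assuming the usual 1 MiB = 1048576 bytes with floor rounding:

    # 63360 transfers/s x 4096 bytes per transfer, floored to whole MiB/s
    echo $(( 63360 * 4096 / 1048576 ))   # prints 247, matching the Total row

The Failed and Miscompares counters stay at 0 here; judging by the "Verify: Yes" line in the configuration block, the -y flag passed to this run makes accel_perf check the decompressed output rather than just move data.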
00:08:53.991 [2024-07-26 05:06:12.673610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60439 ] 00:08:53.991 [2024-07-26 05:06:12.855031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.991 [2024-07-26 05:06:13.099104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val= 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val= 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val= 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val=0x1 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val= 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val= 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val=decompress 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val= 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val=software 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@23 -- # accel_module=software 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val=32 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- 
accel/accel.sh@21 -- # val=32 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val=1 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val=Yes 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val= 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:54.563 05:06:13 -- accel/accel.sh@21 -- # val= 00:08:54.563 05:06:13 -- accel/accel.sh@22 -- # case "$var" in 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # IFS=: 00:08:54.563 05:06:13 -- accel/accel.sh@20 -- # read -r var val 00:08:56.471 05:06:15 -- accel/accel.sh@21 -- # val= 00:08:56.471 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.471 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:08:56.471 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:08:56.471 05:06:15 -- accel/accel.sh@21 -- # val= 00:08:56.471 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.471 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:08:56.471 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:08:56.471 05:06:15 -- accel/accel.sh@21 -- # val= 00:08:56.471 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.471 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:08:56.471 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:08:56.471 05:06:15 -- accel/accel.sh@21 -- # val= 00:08:56.471 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.471 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:08:56.471 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:08:56.471 05:06:15 -- accel/accel.sh@21 -- # val= 00:08:56.471 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.471 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:08:56.471 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:08:56.471 05:06:15 -- accel/accel.sh@21 -- # val= 00:08:56.471 05:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:08:56.471 05:06:15 -- accel/accel.sh@20 -- # IFS=: 00:08:56.471 05:06:15 -- accel/accel.sh@20 -- # read -r var val 00:08:56.471 05:06:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:56.471 05:06:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:56.471 05:06:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:56.471 00:08:56.471 real 0m5.661s 00:08:56.471 user 0m5.063s 00:08:56.471 sys 0m0.389s 00:08:56.471 05:06:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.471 ************************************ 00:08:56.471 END TEST accel_decomp 00:08:56.471 ************************************ 00:08:56.471 05:06:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.729 05:06:15 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
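The run_test line just echoed adds -o 0 to the same decompress workload. The configuration block that follows reports a transfer size of 111250 bytes instead of the 4096 bytes used by the plain decompress run, so -o 0 evidently lets the transfer size follow the compressed input data rather than the default. As a sketch, the underlying benchmark can be launched by hand the same way the harness does; the -c /dev/fd/62 JSON config that the wrapper feeds in is omitted here, on the assumption that accel_perf falls back to its built-in defaults without it:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0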
00:08:56.729 05:06:15 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:56.729 05:06:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:56.729 05:06:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.729 ************************************ 00:08:56.729 START TEST accel_decmop_full 00:08:56.729 ************************************ 00:08:56.729 05:06:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:56.729 05:06:15 -- accel/accel.sh@16 -- # local accel_opc 00:08:56.729 05:06:15 -- accel/accel.sh@17 -- # local accel_module 00:08:56.729 05:06:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:56.729 05:06:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:56.729 05:06:15 -- accel/accel.sh@12 -- # build_accel_config 00:08:56.729 05:06:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:56.729 05:06:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:56.729 05:06:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:56.729 05:06:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:56.729 05:06:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:56.729 05:06:15 -- accel/accel.sh@41 -- # local IFS=, 00:08:56.729 05:06:15 -- accel/accel.sh@42 -- # jq -r . 00:08:56.729 [2024-07-26 05:06:15.664583] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:56.729 [2024-07-26 05:06:15.664912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60490 ] 00:08:56.988 [2024-07-26 05:06:15.851528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.247 [2024-07-26 05:06:16.129507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.779 05:06:18 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:59.779 00:08:59.779 SPDK Configuration: 00:08:59.779 Core mask: 0x1 00:08:59.779 00:08:59.779 Accel Perf Configuration: 00:08:59.779 Workload Type: decompress 00:08:59.779 Transfer size: 111250 bytes 00:08:59.779 Vector count 1 00:08:59.779 Module: software 00:08:59.779 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:59.779 Queue depth: 32 00:08:59.779 Allocate depth: 32 00:08:59.779 # threads/core: 1 00:08:59.779 Run time: 1 seconds 00:08:59.779 Verify: Yes 00:08:59.779 00:08:59.779 Running for 1 seconds... 
00:08:59.779 00:08:59.779 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:59.779 ------------------------------------------------------------------------------------ 00:08:59.779 0,0 4288/s 454 MiB/s 0 0 00:08:59.779 ==================================================================================== 00:08:59.779 Total 4288/s 454 MiB/s 0 0' 00:08:59.779 05:06:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:59.779 05:06:18 -- accel/accel.sh@20 -- # IFS=: 00:08:59.779 05:06:18 -- accel/accel.sh@20 -- # read -r var val 00:08:59.779 05:06:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:59.779 05:06:18 -- accel/accel.sh@12 -- # build_accel_config 00:08:59.779 05:06:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:59.779 05:06:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:59.779 05:06:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:59.779 05:06:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:59.779 05:06:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:59.779 05:06:18 -- accel/accel.sh@41 -- # local IFS=, 00:08:59.779 05:06:18 -- accel/accel.sh@42 -- # jq -r . 00:08:59.779 [2024-07-26 05:06:18.513698] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:59.779 [2024-07-26 05:06:18.514029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60523 ] 00:09:00.038 [2024-07-26 05:06:18.697191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.297 [2024-07-26 05:06:18.926483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val= 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val= 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val= 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val=0x1 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val= 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val= 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val=decompress 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:00.297 05:06:19 -- accel/accel.sh@20 
-- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val= 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val=software 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@23 -- # accel_module=software 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val=32 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val=32 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val=1 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val=Yes 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val= 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:00.297 05:06:19 -- accel/accel.sh@21 -- # val= 00:09:00.297 05:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # IFS=: 00:09:00.297 05:06:19 -- accel/accel.sh@20 -- # read -r var val 00:09:02.199 05:06:21 -- accel/accel.sh@21 -- # val= 00:09:02.199 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.199 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:02.199 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:02.199 05:06:21 -- accel/accel.sh@21 -- # val= 00:09:02.199 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.199 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:02.199 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:02.199 05:06:21 -- accel/accel.sh@21 -- # val= 00:09:02.199 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.199 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:02.199 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:02.199 05:06:21 -- accel/accel.sh@21 -- # 
val= 00:09:02.199 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.199 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:02.199 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:02.199 05:06:21 -- accel/accel.sh@21 -- # val= 00:09:02.199 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.199 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:02.199 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:02.199 05:06:21 -- accel/accel.sh@21 -- # val= 00:09:02.199 05:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.199 05:06:21 -- accel/accel.sh@20 -- # IFS=: 00:09:02.199 05:06:21 -- accel/accel.sh@20 -- # read -r var val 00:09:02.199 05:06:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:02.199 05:06:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:02.199 05:06:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:02.199 00:09:02.199 real 0m5.631s 00:09:02.199 user 0m5.005s 00:09:02.199 sys 0m0.411s 00:09:02.199 05:06:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.199 05:06:21 -- common/autotest_common.sh@10 -- # set +x 00:09:02.199 ************************************ 00:09:02.199 END TEST accel_decmop_full 00:09:02.199 ************************************ 00:09:02.200 05:06:21 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:02.200 05:06:21 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:02.200 05:06:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:02.200 05:06:21 -- common/autotest_common.sh@10 -- # set +x 00:09:02.200 ************************************ 00:09:02.200 START TEST accel_decomp_mcore 00:09:02.200 ************************************ 00:09:02.200 05:06:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:02.200 05:06:21 -- accel/accel.sh@16 -- # local accel_opc 00:09:02.200 05:06:21 -- accel/accel.sh@17 -- # local accel_module 00:09:02.200 05:06:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:02.200 05:06:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:02.200 05:06:21 -- accel/accel.sh@12 -- # build_accel_config 00:09:02.200 05:06:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:02.200 05:06:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:02.200 05:06:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.200 05:06:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:02.200 05:06:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:02.200 05:06:21 -- accel/accel.sh@41 -- # local IFS=, 00:09:02.200 05:06:21 -- accel/accel.sh@42 -- # jq -r . 00:09:02.458 [2024-07-26 05:06:21.352508] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
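accel_decomp_mcore reruns the decompress workload with -m 0xf. The core mask is the standard SPDK/DPDK hex bitmask, so 0xf selects cores 0 through 3, and the EAL output that follows accordingly starts four reactors instead of one and adds one row per core to the results table. A sketch of the equivalent manual invocation, with the same caveat as above about the omitted /dev/fd/62 JSON config:

    # -m 0xf = binary 1111 -> cores 0,1,2,3 (a mask like 0x5 would select cores 0 and 2)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf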
00:09:02.458 [2024-07-26 05:06:21.352657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60575 ] 00:09:02.716 [2024-07-26 05:06:21.785649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.716 [2024-07-26 05:06:21.785833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.716 [2024-07-26 05:06:21.785988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.716 [2024-07-26 05:06:21.786016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.240 05:06:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:05.240 00:09:05.240 SPDK Configuration: 00:09:05.240 Core mask: 0xf 00:09:05.240 00:09:05.240 Accel Perf Configuration: 00:09:05.240 Workload Type: decompress 00:09:05.240 Transfer size: 4096 bytes 00:09:05.240 Vector count 1 00:09:05.240 Module: software 00:09:05.240 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:05.240 Queue depth: 32 00:09:05.240 Allocate depth: 32 00:09:05.240 # threads/core: 1 00:09:05.240 Run time: 1 seconds 00:09:05.240 Verify: Yes 00:09:05.240 00:09:05.240 Running for 1 seconds... 00:09:05.240 00:09:05.240 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:05.240 ------------------------------------------------------------------------------------ 00:09:05.241 0,0 55392/s 216 MiB/s 0 0 00:09:05.241 3,0 61888/s 241 MiB/s 0 0 00:09:05.241 2,0 61440/s 240 MiB/s 0 0 00:09:05.241 1,0 61408/s 239 MiB/s 0 0 00:09:05.241 ==================================================================================== 00:09:05.241 Total 240128/s 938 MiB/s 0 0' 00:09:05.241 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.241 05:06:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:05.241 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.241 05:06:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:05.241 05:06:24 -- accel/accel.sh@12 -- # build_accel_config 00:09:05.241 05:06:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:05.241 05:06:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:05.241 05:06:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:05.241 05:06:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:05.241 05:06:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:05.241 05:06:24 -- accel/accel.sh@41 -- # local IFS=, 00:09:05.241 05:06:24 -- accel/accel.sh@42 -- # jq -r . 00:09:05.241 [2024-07-26 05:06:24.162823] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
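Scaling across the four cores is close to linear but not perfect: the 240128 aggregate transfers/s versus 63360/s for the single-core decompress run earlier is roughly a 3.8x speedup, and core 0 (55392/s) trails the other three, presumably because it also carries the framework's housekeeping work. The aggregate bandwidth again follows from transfers times transfer size:

    echo $(( 240128 * 4096 / 1048576 ))   # prints 938, matching the Total row in MiB/s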
00:09:05.241 [2024-07-26 05:06:24.162979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60604 ] 00:09:05.241 [2024-07-26 05:06:24.345992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.499 [2024-07-26 05:06:24.584089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.499 [2024-07-26 05:06:24.584323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.499 [2024-07-26 05:06:24.584425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.499 [2024-07-26 05:06:24.584385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val= 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val= 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val= 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val=0xf 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val= 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val= 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val=decompress 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val= 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val=software 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@23 -- # accel_module=software 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 
00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val=32 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val=32 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val=1 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val=Yes 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val= 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:05.757 05:06:24 -- accel/accel.sh@21 -- # val= 00:09:05.757 05:06:24 -- accel/accel.sh@22 -- # case "$var" in 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # IFS=: 00:09:05.757 05:06:24 -- accel/accel.sh@20 -- # read -r var val 00:09:08.290 05:06:26 -- accel/accel.sh@21 -- # val= 00:09:08.290 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:09:08.290 05:06:26 -- accel/accel.sh@21 -- # val= 00:09:08.290 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:09:08.290 05:06:26 -- accel/accel.sh@21 -- # val= 00:09:08.290 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:09:08.290 05:06:26 -- accel/accel.sh@21 -- # val= 00:09:08.290 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:09:08.290 05:06:26 -- accel/accel.sh@21 -- # val= 00:09:08.290 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:09:08.290 05:06:26 -- accel/accel.sh@21 -- # val= 00:09:08.290 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:09:08.290 05:06:26 -- accel/accel.sh@21 -- # val= 00:09:08.290 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:09:08.290 05:06:26 -- accel/accel.sh@21 -- # val= 00:09:08.290 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:09:08.290 05:06:26 -- 
accel/accel.sh@20 -- # read -r var val 00:09:08.290 05:06:26 -- accel/accel.sh@21 -- # val= 00:09:08.290 05:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # IFS=: 00:09:08.290 05:06:26 -- accel/accel.sh@20 -- # read -r var val 00:09:08.290 05:06:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:08.290 05:06:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:08.290 05:06:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:08.290 00:09:08.290 real 0m5.614s 00:09:08.290 user 0m16.052s 00:09:08.290 sys 0m0.467s 00:09:08.290 05:06:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.290 05:06:26 -- common/autotest_common.sh@10 -- # set +x 00:09:08.290 ************************************ 00:09:08.290 END TEST accel_decomp_mcore 00:09:08.290 ************************************ 00:09:08.290 05:06:26 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:08.290 05:06:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:08.290 05:06:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.290 05:06:26 -- common/autotest_common.sh@10 -- # set +x 00:09:08.290 ************************************ 00:09:08.290 START TEST accel_decomp_full_mcore 00:09:08.290 ************************************ 00:09:08.290 05:06:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:08.290 05:06:26 -- accel/accel.sh@16 -- # local accel_opc 00:09:08.290 05:06:26 -- accel/accel.sh@17 -- # local accel_module 00:09:08.290 05:06:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:08.290 05:06:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:08.290 05:06:26 -- accel/accel.sh@12 -- # build_accel_config 00:09:08.290 05:06:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:08.290 05:06:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:08.290 05:06:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:08.290 05:06:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:08.290 05:06:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:08.290 05:06:26 -- accel/accel.sh@41 -- # local IFS=, 00:09:08.290 05:06:26 -- accel/accel.sh@42 -- # jq -r . 00:09:08.290 [2024-07-26 05:06:27.012571] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:08.290 [2024-07-26 05:06:27.012678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60659 ] 00:09:08.290 [2024-07-26 05:06:27.174561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.549 [2024-07-26 05:06:27.421517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.549 [2024-07-26 05:06:27.421687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.549 [2024-07-26 05:06:27.421872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.549 [2024-07-26 05:06:27.421896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.083 05:06:29 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:09:11.083 00:09:11.083 SPDK Configuration: 00:09:11.083 Core mask: 0xf 00:09:11.083 00:09:11.083 Accel Perf Configuration: 00:09:11.083 Workload Type: decompress 00:09:11.083 Transfer size: 111250 bytes 00:09:11.083 Vector count 1 00:09:11.083 Module: software 00:09:11.084 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:11.084 Queue depth: 32 00:09:11.084 Allocate depth: 32 00:09:11.084 # threads/core: 1 00:09:11.084 Run time: 1 seconds 00:09:11.084 Verify: Yes 00:09:11.084 00:09:11.084 Running for 1 seconds... 00:09:11.084 00:09:11.084 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:11.084 ------------------------------------------------------------------------------------ 00:09:11.084 0,0 3840/s 407 MiB/s 0 0 00:09:11.084 3,0 4448/s 471 MiB/s 0 0 00:09:11.084 2,0 4416/s 468 MiB/s 0 0 00:09:11.084 1,0 4448/s 471 MiB/s 0 0 00:09:11.084 ==================================================================================== 00:09:11.084 Total 17152/s 1819 MiB/s 0 0' 00:09:11.084 05:06:29 -- accel/accel.sh@20 -- # IFS=: 00:09:11.084 05:06:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:11.084 05:06:29 -- accel/accel.sh@20 -- # read -r var val 00:09:11.084 05:06:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:11.084 05:06:29 -- accel/accel.sh@12 -- # build_accel_config 00:09:11.084 05:06:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:11.084 05:06:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:11.084 05:06:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:11.084 05:06:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:11.084 05:06:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:11.084 05:06:29 -- accel/accel.sh@41 -- # local IFS=, 00:09:11.084 05:06:29 -- accel/accel.sh@42 -- # jq -r . 00:09:11.084 [2024-07-26 05:06:29.860128] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
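accel_decomp_full_mcore combines both options (-o 0 and -m 0xf): four cores moving 111250-byte transfers. Far fewer transfers per second than the 4096-byte runs still translate into much higher bandwidth, since each transfer carries roughly 27x more data. The aggregate checks out the same way:

    echo $(( 17152 * 111250 / 1048576 ))   # prints 1819, matching the Total row in MiB/s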
00:09:11.084 [2024-07-26 05:06:29.860278] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60699 ] 00:09:11.084 [2024-07-26 05:06:30.023736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.343 [2024-07-26 05:06:30.271874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.343 [2024-07-26 05:06:30.272026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.343 [2024-07-26 05:06:30.272223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.343 [2024-07-26 05:06:30.272324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val= 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val= 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val= 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val=0xf 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val= 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val= 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val=decompress 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val= 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val=software 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@23 -- # accel_module=software 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 
00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val=32 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val=32 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val=1 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val=Yes 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.602 05:06:30 -- accel/accel.sh@21 -- # val= 00:09:11.602 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.602 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:11.603 05:06:30 -- accel/accel.sh@21 -- # val= 00:09:11.603 05:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:09:11.603 05:06:30 -- accel/accel.sh@20 -- # IFS=: 00:09:11.603 05:06:30 -- accel/accel.sh@20 -- # read -r var val 00:09:13.509 05:06:32 -- accel/accel.sh@21 -- # val= 00:09:13.509 05:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.509 05:06:32 -- accel/accel.sh@20 -- # IFS=: 00:09:13.509 05:06:32 -- accel/accel.sh@20 -- # read -r var val 00:09:13.509 05:06:32 -- accel/accel.sh@21 -- # val= 00:09:13.509 05:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.509 05:06:32 -- accel/accel.sh@20 -- # IFS=: 00:09:13.509 05:06:32 -- accel/accel.sh@20 -- # read -r var val 00:09:13.509 05:06:32 -- accel/accel.sh@21 -- # val= 00:09:13.509 05:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.509 05:06:32 -- accel/accel.sh@20 -- # IFS=: 00:09:13.509 05:06:32 -- accel/accel.sh@20 -- # read -r var val 00:09:13.509 05:06:32 -- accel/accel.sh@21 -- # val= 00:09:13.509 05:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.509 05:06:32 -- accel/accel.sh@20 -- # IFS=: 00:09:13.509 05:06:32 -- accel/accel.sh@20 -- # read -r var val 00:09:13.509 05:06:32 -- accel/accel.sh@21 -- # val= 00:09:13.769 05:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.769 05:06:32 -- accel/accel.sh@20 -- # IFS=: 00:09:13.769 05:06:32 -- accel/accel.sh@20 -- # read -r var val 00:09:13.769 05:06:32 -- accel/accel.sh@21 -- # val= 00:09:13.769 05:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.769 05:06:32 -- accel/accel.sh@20 -- # IFS=: 00:09:13.769 05:06:32 -- accel/accel.sh@20 -- # read -r var val 00:09:13.769 05:06:32 -- accel/accel.sh@21 -- # val= 00:09:13.769 05:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.769 05:06:32 -- accel/accel.sh@20 -- # IFS=: 00:09:13.769 05:06:32 -- accel/accel.sh@20 -- # read -r var val 00:09:13.769 05:06:32 -- accel/accel.sh@21 -- # val= 00:09:13.769 05:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.769 05:06:32 -- accel/accel.sh@20 -- # IFS=: 00:09:13.769 05:06:32 -- 
accel/accel.sh@20 -- # read -r var val 00:09:13.769 05:06:32 -- accel/accel.sh@21 -- # val= 00:09:13.769 05:06:32 -- accel/accel.sh@22 -- # case "$var" in 00:09:13.769 05:06:32 -- accel/accel.sh@20 -- # IFS=: 00:09:13.769 05:06:32 -- accel/accel.sh@20 -- # read -r var val 00:09:13.769 05:06:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:13.769 05:06:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:13.769 05:06:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:13.769 00:09:13.769 real 0m5.669s 00:09:13.769 user 0m16.537s 00:09:13.769 sys 0m0.405s 00:09:13.769 05:06:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.769 ************************************ 00:09:13.769 05:06:32 -- common/autotest_common.sh@10 -- # set +x 00:09:13.769 END TEST accel_decomp_full_mcore 00:09:13.769 ************************************ 00:09:13.769 05:06:32 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:13.769 05:06:32 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:13.769 05:06:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:13.769 05:06:32 -- common/autotest_common.sh@10 -- # set +x 00:09:13.769 ************************************ 00:09:13.769 START TEST accel_decomp_mthread 00:09:13.769 ************************************ 00:09:13.769 05:06:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:13.769 05:06:32 -- accel/accel.sh@16 -- # local accel_opc 00:09:13.769 05:06:32 -- accel/accel.sh@17 -- # local accel_module 00:09:13.769 05:06:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:13.769 05:06:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:13.769 05:06:32 -- accel/accel.sh@12 -- # build_accel_config 00:09:13.769 05:06:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:13.769 05:06:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:13.769 05:06:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:13.769 05:06:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:13.769 05:06:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:13.769 05:06:32 -- accel/accel.sh@41 -- # local IFS=, 00:09:13.769 05:06:32 -- accel/accel.sh@42 -- # jq -r . 00:09:13.769 [2024-07-26 05:06:32.757067] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:13.769 [2024-07-26 05:06:32.757243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60749 ] 00:09:14.028 [2024-07-26 05:06:32.938428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.287 [2024-07-26 05:06:33.176504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.821 05:06:35 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:09:16.821
00:09:16.821 SPDK Configuration:
00:09:16.821 Core mask: 0x1
00:09:16.821
00:09:16.821 Accel Perf Configuration:
00:09:16.821 Workload Type: decompress
00:09:16.821 Transfer size: 4096 bytes
00:09:16.821 Vector count 1
00:09:16.821 Module: software
00:09:16.821 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib
00:09:16.821 Queue depth: 32
00:09:16.821 Allocate depth: 32
00:09:16.821 # threads/core: 2
00:09:16.821 Run time: 1 seconds
00:09:16.821 Verify: Yes
00:09:16.821
00:09:16.821 Running for 1 seconds...
00:09:16.821
00:09:16.821 Core,Thread Transfers Bandwidth Failed Miscompares
00:09:16.821 ------------------------------------------------------------------------------------
00:09:16.821 0,1 33536/s 61 MiB/s 0 0
00:09:16.821 0,0 33440/s 61 MiB/s 0 0
00:09:16.821 ====================================================================================
00:09:16.821 Total 66976/s 261 MiB/s 0 0'
00:09:16.821 05:06:35 -- accel/accel.sh@20 -- # IFS=:
00:09:16.821 05:06:35 -- accel/accel.sh@20 -- # read -r var val
00:09:16.821 05:06:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:09:16.821 05:06:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:09:16.821 05:06:35 -- accel/accel.sh@12 -- # build_accel_config
00:09:16.821 05:06:35 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:16.821 05:06:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:16.821 05:06:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:16.821 05:06:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:16.821 05:06:35 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:16.821 05:06:35 -- accel/accel.sh@41 -- # local IFS=,
00:09:16.821 05:06:35 -- accel/accel.sh@42 -- # jq -r .
00:09:16.821 [2024-07-26 05:06:35.505057] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
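
The accel_perf invocation echoed above can be reproduced by hand from the paths this log records. A minimal sketch, assuming the SPDK tree is built at /home/vagrant/spdk_repo/spdk; the harness additionally pipes a generated accel JSON config over file descriptor 62 (-c /dev/fd/62), which a bare run should be able to omit (an assumption; the harness always supplies it):

  # decompress the bib test file with the software module: 1 second run (-t 1),
  # verify enabled (-y), 2 threads per core (-T 2) -- flags as logged above
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
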
00:09:16.821 [2024-07-26 05:06:35.505471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60780 ] 00:09:16.821 [2024-07-26 05:06:35.688565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.821 [2024-07-26 05:06:35.928366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.388 05:06:36 -- accel/accel.sh@21 -- # val= 00:09:17.388 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.388 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.388 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.388 05:06:36 -- accel/accel.sh@21 -- # val= 00:09:17.388 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.388 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.388 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.388 05:06:36 -- accel/accel.sh@21 -- # val= 00:09:17.388 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.388 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.388 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.388 05:06:36 -- accel/accel.sh@21 -- # val=0x1 00:09:17.388 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.388 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.388 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.388 05:06:36 -- accel/accel.sh@21 -- # val= 00:09:17.388 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.388 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.388 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.389 05:06:36 -- accel/accel.sh@21 -- # val= 00:09:17.389 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.389 05:06:36 -- accel/accel.sh@21 -- # val=decompress 00:09:17.389 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.389 05:06:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.389 05:06:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:17.389 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.389 05:06:36 -- accel/accel.sh@21 -- # val= 00:09:17.389 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.389 05:06:36 -- accel/accel.sh@21 -- # val=software 00:09:17.389 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.389 05:06:36 -- accel/accel.sh@23 -- # accel_module=software 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.389 05:06:36 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:17.389 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.389 05:06:36 -- accel/accel.sh@21 -- # val=32 00:09:17.389 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.389 05:06:36 -- 
accel/accel.sh@21 -- # val=32 00:09:17.389 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.389 05:06:36 -- accel/accel.sh@21 -- # val=2 00:09:17.389 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.389 05:06:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:17.389 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.389 05:06:36 -- accel/accel.sh@21 -- # val=Yes 00:09:17.389 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.389 05:06:36 -- accel/accel.sh@21 -- # val= 00:09:17.389 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:17.389 05:06:36 -- accel/accel.sh@21 -- # val= 00:09:17.389 05:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # IFS=: 00:09:17.389 05:06:36 -- accel/accel.sh@20 -- # read -r var val 00:09:19.314 05:06:38 -- accel/accel.sh@21 -- # val= 00:09:19.314 05:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.314 05:06:38 -- accel/accel.sh@20 -- # IFS=: 00:09:19.314 05:06:38 -- accel/accel.sh@20 -- # read -r var val 00:09:19.314 05:06:38 -- accel/accel.sh@21 -- # val= 00:09:19.314 05:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.314 05:06:38 -- accel/accel.sh@20 -- # IFS=: 00:09:19.314 05:06:38 -- accel/accel.sh@20 -- # read -r var val 00:09:19.315 05:06:38 -- accel/accel.sh@21 -- # val= 00:09:19.315 05:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.315 05:06:38 -- accel/accel.sh@20 -- # IFS=: 00:09:19.315 05:06:38 -- accel/accel.sh@20 -- # read -r var val 00:09:19.315 05:06:38 -- accel/accel.sh@21 -- # val= 00:09:19.315 05:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.315 05:06:38 -- accel/accel.sh@20 -- # IFS=: 00:09:19.315 05:06:38 -- accel/accel.sh@20 -- # read -r var val 00:09:19.315 05:06:38 -- accel/accel.sh@21 -- # val= 00:09:19.315 05:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.315 05:06:38 -- accel/accel.sh@20 -- # IFS=: 00:09:19.315 05:06:38 -- accel/accel.sh@20 -- # read -r var val 00:09:19.315 05:06:38 -- accel/accel.sh@21 -- # val= 00:09:19.315 05:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.315 05:06:38 -- accel/accel.sh@20 -- # IFS=: 00:09:19.315 05:06:38 -- accel/accel.sh@20 -- # read -r var val 00:09:19.315 05:06:38 -- accel/accel.sh@21 -- # val= 00:09:19.315 05:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:19.315 05:06:38 -- accel/accel.sh@20 -- # IFS=: 00:09:19.315 05:06:38 -- accel/accel.sh@20 -- # read -r var val 00:09:19.315 05:06:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:19.315 05:06:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:19.315 05:06:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:19.315 00:09:19.315 real 0m5.559s 00:09:19.315 user 0m4.957s 00:09:19.315 sys 0m0.392s 00:09:19.315 05:06:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.315 05:06:38 -- common/autotest_common.sh@10 -- # set +x 00:09:19.315 ************************************ 00:09:19.315 END 
TEST accel_decomp_mthread 00:09:19.315 ************************************ 00:09:19.315 05:06:38 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:19.315 05:06:38 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:19.315 05:06:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:19.315 05:06:38 -- common/autotest_common.sh@10 -- # set +x 00:09:19.315 ************************************ 00:09:19.315 START TEST accel_deomp_full_mthread 00:09:19.315 ************************************ 00:09:19.315 05:06:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:19.315 05:06:38 -- accel/accel.sh@16 -- # local accel_opc 00:09:19.315 05:06:38 -- accel/accel.sh@17 -- # local accel_module 00:09:19.315 05:06:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:19.315 05:06:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:19.315 05:06:38 -- accel/accel.sh@12 -- # build_accel_config 00:09:19.315 05:06:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:19.315 05:06:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:19.315 05:06:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:19.315 05:06:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:19.315 05:06:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:19.315 05:06:38 -- accel/accel.sh@41 -- # local IFS=, 00:09:19.315 05:06:38 -- accel/accel.sh@42 -- # jq -r . 00:09:19.315 [2024-07-26 05:06:38.365617] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:19.315 [2024-07-26 05:06:38.365767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60831 ] 00:09:19.574 [2024-07-26 05:06:38.551753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.833 [2024-07-26 05:06:38.838039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.368 05:06:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:09:22.368 00:09:22.368 SPDK Configuration: 00:09:22.368 Core mask: 0x1 00:09:22.368 00:09:22.368 Accel Perf Configuration: 00:09:22.368 Workload Type: decompress 00:09:22.368 Transfer size: 111250 bytes 00:09:22.368 Vector count 1 00:09:22.368 Module: software 00:09:22.368 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:22.368 Queue depth: 32 00:09:22.368 Allocate depth: 32 00:09:22.368 # threads/core: 2 00:09:22.368 Run time: 1 seconds 00:09:22.368 Verify: Yes 00:09:22.368 00:09:22.368 Running for 1 seconds... 
00:09:22.368
00:09:22.368 Core,Thread Transfers Bandwidth Failed Miscompares
00:09:22.368 ------------------------------------------------------------------------------------
00:09:22.368 0,1 2080/s 85 MiB/s 0 0
00:09:22.368 0,0 2048/s 84 MiB/s 0 0
00:09:22.368 ====================================================================================
00:09:22.368 Total 4128/s 437 MiB/s 0 0'
00:09:22.368 05:06:41 -- accel/accel.sh@20 -- # IFS=:
00:09:22.368 05:06:41 -- accel/accel.sh@20 -- # read -r var val
00:09:22.368 05:06:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:09:22.368 05:06:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
00:09:22.368 05:06:41 -- accel/accel.sh@12 -- # build_accel_config
00:09:22.368 05:06:41 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:09:22.368 05:06:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:22.368 05:06:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:22.368 05:06:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:09:22.368 05:06:41 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:09:22.368 05:06:41 -- accel/accel.sh@41 -- # local IFS=,
00:09:22.368 05:06:41 -- accel/accel.sh@42 -- # jq -r .
00:09:22.368 [2024-07-26 05:06:41.241398] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:09:22.368 [2024-07-26 05:06:41.241726] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60864 ]
00:09:22.368 [2024-07-26 05:06:41.424144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:22.628 [2024-07-26 05:06:41.668250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val=
00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in
00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=:
00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val
00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val=
00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in
00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=:
00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val
00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val=
00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in
00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=:
00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val
00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val=0x1
00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in
00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=:
00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val
00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val=
00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in
00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=:
00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val
00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val=
00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in
00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=:
00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val
00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val=decompress
00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in
00:09:22.887 05:06:41 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val='111250 bytes' 00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val= 00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val=software 00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.887 05:06:41 -- accel/accel.sh@23 -- # accel_module=software 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val=32 00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val=32 00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val=2 00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val=Yes 00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val= 00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:09:22.887 05:06:41 -- accel/accel.sh@21 -- # val= 00:09:22.887 05:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # IFS=: 00:09:22.887 05:06:41 -- accel/accel.sh@20 -- # read -r var val 00:09:25.421 05:06:44 -- accel/accel.sh@21 -- # val= 00:09:25.421 05:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # IFS=: 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # read -r var val 00:09:25.421 05:06:44 -- accel/accel.sh@21 -- # val= 00:09:25.421 05:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # IFS=: 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # read -r var val 00:09:25.421 05:06:44 -- accel/accel.sh@21 -- # val= 00:09:25.421 05:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # IFS=: 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # 
read -r var val 00:09:25.421 05:06:44 -- accel/accel.sh@21 -- # val= 00:09:25.421 05:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # IFS=: 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # read -r var val 00:09:25.421 05:06:44 -- accel/accel.sh@21 -- # val= 00:09:25.421 05:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # IFS=: 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # read -r var val 00:09:25.421 05:06:44 -- accel/accel.sh@21 -- # val= 00:09:25.421 05:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # IFS=: 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # read -r var val 00:09:25.421 05:06:44 -- accel/accel.sh@21 -- # val= 00:09:25.421 05:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # IFS=: 00:09:25.421 05:06:44 -- accel/accel.sh@20 -- # read -r var val 00:09:25.421 05:06:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:25.421 05:06:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:09:25.421 05:06:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:25.421 00:09:25.421 real 0m5.735s 00:09:25.421 user 0m5.138s 00:09:25.421 sys 0m0.383s 00:09:25.421 05:06:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.421 ************************************ 00:09:25.421 END TEST accel_deomp_full_mthread 00:09:25.421 05:06:44 -- common/autotest_common.sh@10 -- # set +x 00:09:25.421 ************************************ 00:09:25.421 05:06:44 -- accel/accel.sh@116 -- # [[ n == y ]] 00:09:25.421 05:06:44 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:25.421 05:06:44 -- accel/accel.sh@129 -- # build_accel_config 00:09:25.421 05:06:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:25.421 05:06:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:25.421 05:06:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:25.421 05:06:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:25.421 05:06:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:25.421 05:06:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:25.421 05:06:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:25.421 05:06:44 -- accel/accel.sh@41 -- # local IFS=, 00:09:25.421 05:06:44 -- accel/accel.sh@42 -- # jq -r . 00:09:25.421 05:06:44 -- common/autotest_common.sh@10 -- # set +x 00:09:25.421 ************************************ 00:09:25.421 START TEST accel_dif_functional_tests 00:09:25.421 ************************************ 00:09:25.421 05:06:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:25.421 [2024-07-26 05:06:44.218568] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
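
The accel_dif_functional_tests unit starting here runs a CUnit suite over SPDK's DIF (Data Integrity Field) verify and generate-copy paths; the dif.c *ERROR* lines in its output are emitted while the suite feeds deliberately mismatched guard, app-tag, and ref-tag values, and the corresponding cases still report ...passed. The wrapper invocation, as captured just above, is:

  # run_test is the harness wrapper; the accel JSON config arrives on fd 62
  run_test accel_dif_functional_tests \
      /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62
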
00:09:25.421 [2024-07-26 05:06:44.218722] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60917 ] 00:09:25.421 [2024-07-26 05:06:44.406428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:25.680 [2024-07-26 05:06:44.699934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.680 [2024-07-26 05:06:44.700025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.680 [2024-07-26 05:06:44.700058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.249 00:09:26.249 00:09:26.249 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.249 http://cunit.sourceforge.net/ 00:09:26.249 00:09:26.249 00:09:26.249 Suite: accel_dif 00:09:26.249 Test: verify: DIF generated, GUARD check ...passed 00:09:26.249 Test: verify: DIF generated, APPTAG check ...passed 00:09:26.249 Test: verify: DIF generated, REFTAG check ...passed 00:09:26.249 Test: verify: DIF not generated, GUARD check ...passed 00:09:26.249 Test: verify: DIF not generated, APPTAG check ...[2024-07-26 05:06:45.101136] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:26.249 [2024-07-26 05:06:45.101299] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:26.249 [2024-07-26 05:06:45.101369] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:26.249 passed 00:09:26.249 Test: verify: DIF not generated, REFTAG check ...[2024-07-26 05:06:45.101502] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:26.249 passed 00:09:26.249 Test: verify: APPTAG correct, APPTAG check ...[2024-07-26 05:06:45.101553] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:26.249 [2024-07-26 05:06:45.101672] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:26.249 passed 00:09:26.249 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:09:26.249 Test: verify: APPTAG incorrect, no APPTAG check ...passed[2024-07-26 05:06:45.101778] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:26.249 00:09:26.249 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:26.249 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:26.249 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-26 05:06:45.102169] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:26.249 passed 00:09:26.249 Test: generate copy: DIF generated, GUARD check ...passed 00:09:26.249 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:26.249 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:26.249 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:26.249 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:26.249 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:26.249 Test: generate copy: iovecs-len validate ...passed 00:09:26.249 Test: generate copy: buffer alignment validate ...[2024-07-26 05:06:45.102837] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:09:26.249 passed 00:09:26.249 00:09:26.249 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.249 suites 1 1 n/a 0 0 00:09:26.249 tests 20 20 20 0 0 00:09:26.249 asserts 204 204 204 0 n/a 00:09:26.249 00:09:26.249 Elapsed time = 0.005 seconds 00:09:27.627 ************************************ 00:09:27.627 END TEST accel_dif_functional_tests 00:09:27.627 ************************************ 00:09:27.627 00:09:27.627 real 0m2.339s 00:09:27.627 user 0m4.545s 00:09:27.627 sys 0m0.270s 00:09:27.627 05:06:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.627 05:06:46 -- common/autotest_common.sh@10 -- # set +x 00:09:27.627 00:09:27.627 real 2m4.233s 00:09:27.627 user 2m15.142s 00:09:27.627 sys 0m10.365s 00:09:27.627 05:06:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.627 05:06:46 -- common/autotest_common.sh@10 -- # set +x 00:09:27.627 ************************************ 00:09:27.627 END TEST accel 00:09:27.627 ************************************ 00:09:27.627 05:06:46 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:27.627 05:06:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:27.627 05:06:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:27.627 05:06:46 -- common/autotest_common.sh@10 -- # set +x 00:09:27.627 ************************************ 00:09:27.627 START TEST accel_rpc 00:09:27.627 ************************************ 00:09:27.627 05:06:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:27.627 * Looking for test storage... 00:09:27.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:27.627 05:06:46 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:27.627 05:06:46 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=60998 00:09:27.627 05:06:46 -- accel/accel_rpc.sh@15 -- # waitforlisten 60998 00:09:27.627 05:06:46 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:27.627 05:06:46 -- common/autotest_common.sh@819 -- # '[' -z 60998 ']' 00:09:27.627 05:06:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.627 05:06:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:27.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.627 05:06:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.627 05:06:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:27.627 05:06:46 -- common/autotest_common.sh@10 -- # set +x 00:09:27.885 [2024-07-26 05:06:46.765161] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
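
The accel_rpc suite that begins here exercises opcode reassignment over JSON-RPC. A condensed sketch of the same flow, built only from commands visible in this log (the target must be started with --wait-for-rpc so the assignment is accepted before module initialization):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  # (the harness waits for the RPC socket before issuing any calls)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  # the suite then checks that the copy opcode reports the software module
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy
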
00:09:27.885 [2024-07-26 05:06:46.765328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60998 ] 00:09:27.885 [2024-07-26 05:06:46.947724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.145 [2024-07-26 05:06:47.185252] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:28.145 [2024-07-26 05:06:47.185471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.713 05:06:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:28.713 05:06:47 -- common/autotest_common.sh@852 -- # return 0 00:09:28.713 05:06:47 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:28.713 05:06:47 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:28.713 05:06:47 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:28.713 05:06:47 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:28.713 05:06:47 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:28.713 05:06:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:28.713 05:06:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:28.713 05:06:47 -- common/autotest_common.sh@10 -- # set +x 00:09:28.713 ************************************ 00:09:28.713 START TEST accel_assign_opcode 00:09:28.713 ************************************ 00:09:28.713 05:06:47 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:09:28.713 05:06:47 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:28.713 05:06:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.713 05:06:47 -- common/autotest_common.sh@10 -- # set +x 00:09:28.713 [2024-07-26 05:06:47.582325] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:28.713 05:06:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.713 05:06:47 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:28.713 05:06:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.713 05:06:47 -- common/autotest_common.sh@10 -- # set +x 00:09:28.713 [2024-07-26 05:06:47.590258] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:28.713 05:06:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.713 05:06:47 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:28.713 05:06:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.713 05:06:47 -- common/autotest_common.sh@10 -- # set +x 00:09:29.648 05:06:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:29.648 05:06:48 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:29.648 05:06:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:29.648 05:06:48 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:29.648 05:06:48 -- common/autotest_common.sh@10 -- # set +x 00:09:29.648 05:06:48 -- accel/accel_rpc.sh@42 -- # grep software 00:09:29.648 05:06:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:29.648 software 00:09:29.648 ************************************ 00:09:29.648 END TEST accel_assign_opcode 00:09:29.648 ************************************ 00:09:29.648 00:09:29.649 real 0m1.020s 00:09:29.649 user 0m0.053s 00:09:29.649 sys 0m0.011s 00:09:29.649 05:06:48 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.649 05:06:48 -- common/autotest_common.sh@10 -- # set +x 00:09:29.649 05:06:48 -- accel/accel_rpc.sh@55 -- # killprocess 60998 00:09:29.649 05:06:48 -- common/autotest_common.sh@926 -- # '[' -z 60998 ']' 00:09:29.649 05:06:48 -- common/autotest_common.sh@930 -- # kill -0 60998 00:09:29.649 05:06:48 -- common/autotest_common.sh@931 -- # uname 00:09:29.649 05:06:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:29.649 05:06:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60998 00:09:29.649 killing process with pid 60998 00:09:29.649 05:06:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:29.649 05:06:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:29.649 05:06:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60998' 00:09:29.649 05:06:48 -- common/autotest_common.sh@945 -- # kill 60998 00:09:29.649 05:06:48 -- common/autotest_common.sh@950 -- # wait 60998 00:09:32.247 00:09:32.247 real 0m4.695s 00:09:32.247 user 0m4.534s 00:09:32.247 sys 0m0.602s 00:09:32.247 ************************************ 00:09:32.247 END TEST accel_rpc 00:09:32.247 ************************************ 00:09:32.247 05:06:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.247 05:06:51 -- common/autotest_common.sh@10 -- # set +x 00:09:32.247 05:06:51 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:32.247 05:06:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:32.247 05:06:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:32.247 05:06:51 -- common/autotest_common.sh@10 -- # set +x 00:09:32.247 ************************************ 00:09:32.247 START TEST app_cmdline 00:09:32.247 ************************************ 00:09:32.247 05:06:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:32.505 * Looking for test storage... 00:09:32.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:32.505 05:06:51 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:32.505 05:06:51 -- app/cmdline.sh@17 -- # spdk_tgt_pid=61123 00:09:32.505 05:06:51 -- app/cmdline.sh@18 -- # waitforlisten 61123 00:09:32.505 05:06:51 -- common/autotest_common.sh@819 -- # '[' -z 61123 ']' 00:09:32.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.505 05:06:51 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:32.505 05:06:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.505 05:06:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:32.505 05:06:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.505 05:06:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:32.505 05:06:51 -- common/autotest_common.sh@10 -- # set +x 00:09:32.505 [2024-07-26 05:06:51.505824] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
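
The app_cmdline suite below verifies RPC allowlisting: spdk_tgt is launched so that only spdk_get_version and rpc_get_methods may be called, and any other method must fail with JSON-RPC error -32601 ("Method not found"), which the env_dpdk_get_mem_stats probe further down confirms. A condensed sketch from the flags recorded in this log:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
      --rpcs-allowed spdk_get_version,rpc_get_methods &
  # once the RPC socket is up:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version       # allowed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats # rejected: -32601
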
00:09:32.505 [2024-07-26 05:06:51.505987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61123 ] 00:09:32.764 [2024-07-26 05:06:51.682471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.023 [2024-07-26 05:06:51.934466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:33.023 [2024-07-26 05:06:51.934689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.399 05:06:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:34.399 05:06:53 -- common/autotest_common.sh@852 -- # return 0 00:09:34.399 05:06:53 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:34.399 { 00:09:34.399 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:09:34.399 "fields": { 00:09:34.399 "major": 24, 00:09:34.399 "minor": 1, 00:09:34.399 "patch": 1, 00:09:34.399 "suffix": "-pre", 00:09:34.399 "commit": "dbef7efac" 00:09:34.399 } 00:09:34.399 } 00:09:34.399 05:06:53 -- app/cmdline.sh@22 -- # expected_methods=() 00:09:34.399 05:06:53 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:34.399 05:06:53 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:34.399 05:06:53 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:34.399 05:06:53 -- app/cmdline.sh@26 -- # sort 00:09:34.399 05:06:53 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:34.399 05:06:53 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:34.399 05:06:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:34.399 05:06:53 -- common/autotest_common.sh@10 -- # set +x 00:09:34.399 05:06:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:34.399 05:06:53 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:34.399 05:06:53 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:34.399 05:06:53 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:34.399 05:06:53 -- common/autotest_common.sh@640 -- # local es=0 00:09:34.399 05:06:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:34.399 05:06:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.399 05:06:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:34.399 05:06:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.399 05:06:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:34.399 05:06:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.399 05:06:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:34.399 05:06:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.399 05:06:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:34.399 05:06:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:34.658 request: 00:09:34.658 { 00:09:34.658 "method": "env_dpdk_get_mem_stats", 00:09:34.658 "req_id": 1 00:09:34.658 } 00:09:34.658 Got 
JSON-RPC error response 00:09:34.658 response: 00:09:34.658 { 00:09:34.658 "code": -32601, 00:09:34.658 "message": "Method not found" 00:09:34.658 } 00:09:34.658 05:06:53 -- common/autotest_common.sh@643 -- # es=1 00:09:34.658 05:06:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:34.658 05:06:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:34.658 05:06:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:34.658 05:06:53 -- app/cmdline.sh@1 -- # killprocess 61123 00:09:34.658 05:06:53 -- common/autotest_common.sh@926 -- # '[' -z 61123 ']' 00:09:34.658 05:06:53 -- common/autotest_common.sh@930 -- # kill -0 61123 00:09:34.658 05:06:53 -- common/autotest_common.sh@931 -- # uname 00:09:34.658 05:06:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:34.658 05:06:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61123 00:09:34.658 killing process with pid 61123 00:09:34.658 05:06:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:34.658 05:06:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:34.658 05:06:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61123' 00:09:34.658 05:06:53 -- common/autotest_common.sh@945 -- # kill 61123 00:09:34.658 05:06:53 -- common/autotest_common.sh@950 -- # wait 61123 00:09:37.190 ************************************ 00:09:37.190 END TEST app_cmdline 00:09:37.190 ************************************ 00:09:37.190 00:09:37.190 real 0m4.947s 00:09:37.190 user 0m5.412s 00:09:37.190 sys 0m0.665s 00:09:37.190 05:06:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.190 05:06:56 -- common/autotest_common.sh@10 -- # set +x 00:09:37.190 05:06:56 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:37.190 05:06:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:37.190 05:06:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:37.190 05:06:56 -- common/autotest_common.sh@10 -- # set +x 00:09:37.449 ************************************ 00:09:37.449 START TEST version 00:09:37.449 ************************************ 00:09:37.449 05:06:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:37.449 * Looking for test storage... 
00:09:37.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:37.449 05:06:56 -- app/version.sh@17 -- # get_header_version major 00:09:37.449 05:06:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:37.449 05:06:56 -- app/version.sh@14 -- # cut -f2 00:09:37.449 05:06:56 -- app/version.sh@14 -- # tr -d '"' 00:09:37.449 05:06:56 -- app/version.sh@17 -- # major=24 00:09:37.449 05:06:56 -- app/version.sh@18 -- # get_header_version minor 00:09:37.449 05:06:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:37.449 05:06:56 -- app/version.sh@14 -- # cut -f2 00:09:37.449 05:06:56 -- app/version.sh@14 -- # tr -d '"' 00:09:37.449 05:06:56 -- app/version.sh@18 -- # minor=1 00:09:37.449 05:06:56 -- app/version.sh@19 -- # get_header_version patch 00:09:37.449 05:06:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:37.449 05:06:56 -- app/version.sh@14 -- # cut -f2 00:09:37.449 05:06:56 -- app/version.sh@14 -- # tr -d '"' 00:09:37.449 05:06:56 -- app/version.sh@19 -- # patch=1 00:09:37.449 05:06:56 -- app/version.sh@20 -- # get_header_version suffix 00:09:37.449 05:06:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:37.449 05:06:56 -- app/version.sh@14 -- # cut -f2 00:09:37.449 05:06:56 -- app/version.sh@14 -- # tr -d '"' 00:09:37.449 05:06:56 -- app/version.sh@20 -- # suffix=-pre 00:09:37.449 05:06:56 -- app/version.sh@22 -- # version=24.1 00:09:37.449 05:06:56 -- app/version.sh@25 -- # (( patch != 0 )) 00:09:37.449 05:06:56 -- app/version.sh@25 -- # version=24.1.1 00:09:37.449 05:06:56 -- app/version.sh@28 -- # version=24.1.1rc0 00:09:37.449 05:06:56 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:37.449 05:06:56 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:37.449 05:06:56 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:09:37.449 05:06:56 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:09:37.449 00:09:37.449 real 0m0.172s 00:09:37.449 user 0m0.092s 00:09:37.449 sys 0m0.120s 00:09:37.449 ************************************ 00:09:37.449 END TEST version 00:09:37.449 ************************************ 00:09:37.449 05:06:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.449 05:06:56 -- common/autotest_common.sh@10 -- # set +x 00:09:37.449 05:06:56 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:09:37.449 05:06:56 -- spdk/autotest.sh@204 -- # uname -s 00:09:37.449 05:06:56 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:09:37.449 05:06:56 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:09:37.449 05:06:56 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:09:37.449 05:06:56 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:09:37.449 05:06:56 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:37.449 05:06:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:37.449 05:06:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:37.449 05:06:56 -- common/autotest_common.sh@10 -- # set +x 00:09:37.449 
************************************ 00:09:37.449 START TEST blockdev_nvme 00:09:37.449 ************************************ 00:09:37.449 05:06:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:37.707 * Looking for test storage... 00:09:37.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:37.707 05:06:56 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:37.707 05:06:56 -- bdev/nbd_common.sh@6 -- # set -e 00:09:37.707 05:06:56 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:37.707 05:06:56 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:37.707 05:06:56 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:37.707 05:06:56 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:37.707 05:06:56 -- bdev/blockdev.sh@18 -- # : 00:09:37.707 05:06:56 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:09:37.707 05:06:56 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:09:37.707 05:06:56 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:09:37.707 05:06:56 -- bdev/blockdev.sh@672 -- # uname -s 00:09:37.707 05:06:56 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:09:37.707 05:06:56 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:09:37.707 05:06:56 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:09:37.707 05:06:56 -- bdev/blockdev.sh@681 -- # crypto_device= 00:09:37.707 05:06:56 -- bdev/blockdev.sh@682 -- # dek= 00:09:37.707 05:06:56 -- bdev/blockdev.sh@683 -- # env_ctx= 00:09:37.707 05:06:56 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:09:37.707 05:06:56 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:09:37.707 05:06:56 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:09:37.707 05:06:56 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:09:37.707 05:06:56 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:09:37.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.707 05:06:56 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61297 00:09:37.707 05:06:56 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:37.707 05:06:56 -- bdev/blockdev.sh@47 -- # waitforlisten 61297 00:09:37.707 05:06:56 -- common/autotest_common.sh@819 -- # '[' -z 61297 ']' 00:09:37.707 05:06:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.707 05:06:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:37.707 05:06:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.707 05:06:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:37.707 05:06:56 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:37.707 05:06:56 -- common/autotest_common.sh@10 -- # set +x 00:09:37.707 [2024-07-26 05:06:56.768741] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
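
The blockdev_nvme suite starting here generates its bdev configuration with scripts/gen_nvme.sh and loads it through load_subsystem_config (see the JSON below). For a single controller, the equivalent attach RPC would look roughly like this, with the name and PCI address taken from the generated config; the flag spelling is an assumption, not copied from this log:

  # attach the QEMU NVMe controller at 0000:00:06.0 as "Nvme0"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme0 -t PCIe -a 0000:00:06.0
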
00:09:37.707 [2024-07-26 05:06:56.768899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61297 ] 00:09:37.965 [2024-07-26 05:06:56.948703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.223 [2024-07-26 05:06:57.199228] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:38.223 [2024-07-26 05:06:57.199441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.158 05:06:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:39.158 05:06:58 -- common/autotest_common.sh@852 -- # return 0 00:09:39.158 05:06:58 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:09:39.158 05:06:58 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:09:39.158 05:06:58 -- bdev/blockdev.sh@79 -- # local json 00:09:39.158 05:06:58 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:09:39.158 05:06:58 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:39.417 05:06:58 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:09:39.417 05:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:39.417 05:06:58 -- common/autotest_common.sh@10 -- # set +x 00:09:39.676 05:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:39.676 05:06:58 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:09:39.676 05:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:39.676 05:06:58 -- common/autotest_common.sh@10 -- # set +x 00:09:39.676 05:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:39.676 05:06:58 -- bdev/blockdev.sh@738 -- # cat 00:09:39.676 05:06:58 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:09:39.676 05:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:39.676 05:06:58 -- common/autotest_common.sh@10 -- # set +x 00:09:39.676 05:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:39.676 05:06:58 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:09:39.676 05:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:39.676 05:06:58 -- common/autotest_common.sh@10 -- # set +x 00:09:39.676 05:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:39.676 05:06:58 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:39.676 05:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:39.676 05:06:58 -- common/autotest_common.sh@10 -- # set +x 00:09:39.676 05:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:39.676 05:06:58 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:09:39.676 05:06:58 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:09:39.676 05:06:58 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:09:39.676 05:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:39.676 05:06:58 -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.676 05:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:39.935 05:06:58 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:09:39.935 05:06:58 -- bdev/blockdev.sh@747 -- # jq -r .name 00:09:39.936 05:06:58 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "854f3c78-5d4b-418b-a53e-6c862e111da4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "854f3c78-5d4b-418b-a53e-6c862e111da4",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "5c8d68f9-c49c-461a-bf29-f8cf5a2e8b85"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5c8d68f9-c49c-461a-bf29-f8cf5a2e8b85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "1dcbcb67-d6f1-49b4-8d16-9c7ef788c424"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1dcbcb67-d6f1-49b4-8d16-9c7ef788c424",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "515789e1-159e-49f4-b5e0-57bb9baa963a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "515789e1-159e-49f4-b5e0-57bb9baa963a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "94bf42c0-b812-4dbe-9561-666276bde203"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "94bf42c0-b812-4dbe-9561-666276bde203",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "41455e15-85f7-4ef8-9dda-b5e311bbabd1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"41455e15-85f7-4ef8-9dda-b5e311bbabd1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:39.936 05:06:58 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:09:39.936 05:06:58 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:09:39.936 05:06:58 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:09:39.936 05:06:58 -- bdev/blockdev.sh@752 -- # killprocess 61297 00:09:39.936 05:06:58 -- common/autotest_common.sh@926 -- # '[' -z 61297 ']' 00:09:39.936 05:06:58 -- common/autotest_common.sh@930 -- # kill -0 61297 00:09:39.936 05:06:58 -- common/autotest_common.sh@931 -- # uname 00:09:39.936 05:06:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:39.936 05:06:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61297 00:09:39.936 killing process with pid 61297 00:09:39.936 05:06:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:39.936 05:06:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:39.936 05:06:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61297' 00:09:39.936 05:06:58 -- common/autotest_common.sh@945 -- # kill 61297 00:09:39.936 05:06:58 -- common/autotest_common.sh@950 -- # wait 61297 00:09:42.498 05:07:01 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:42.498 05:07:01 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:42.498 05:07:01 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:42.498 05:07:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:42.498 05:07:01 -- common/autotest_common.sh@10 -- # set +x 00:09:42.498 ************************************ 00:09:42.498 START TEST bdev_hello_world 00:09:42.498 ************************************ 00:09:42.498 05:07:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:42.757 [2024-07-26 05:07:01.630864] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:42.757 [2024-07-26 05:07:01.630987] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61405 ] 00:09:42.757 [2024-07-26 05:07:01.794519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.016 [2024-07-26 05:07:02.047124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.952 [2024-07-26 05:07:02.795503] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:43.952 [2024-07-26 05:07:02.795553] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:43.952 [2024-07-26 05:07:02.795594] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:43.952 [2024-07-26 05:07:02.798941] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:43.952 [2024-07-26 05:07:02.799472] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:43.952 [2024-07-26 05:07:02.799502] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:43.952 [2024-07-26 05:07:02.799749] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:43.952 00:09:43.952 [2024-07-26 05:07:02.799772] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:45.328 00:09:45.328 real 0m2.513s 00:09:45.328 user 0m2.151s 00:09:45.328 sys 0m0.250s 00:09:45.328 05:07:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.328 05:07:04 -- common/autotest_common.sh@10 -- # set +x 00:09:45.328 ************************************ 00:09:45.329 END TEST bdev_hello_world 00:09:45.329 ************************************ 00:09:45.329 05:07:04 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:09:45.329 05:07:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:45.329 05:07:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:45.329 05:07:04 -- common/autotest_common.sh@10 -- # set +x 00:09:45.329 ************************************ 00:09:45.329 START TEST bdev_bounds 00:09:45.329 ************************************ 00:09:45.329 05:07:04 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:09:45.329 Process bdevio pid: 61447 00:09:45.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.329 05:07:04 -- bdev/blockdev.sh@288 -- # bdevio_pid=61447 00:09:45.329 05:07:04 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:45.329 05:07:04 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 61447' 00:09:45.329 05:07:04 -- bdev/blockdev.sh@291 -- # waitforlisten 61447 00:09:45.329 05:07:04 -- common/autotest_common.sh@819 -- # '[' -z 61447 ']' 00:09:45.329 05:07:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.329 05:07:04 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:45.329 05:07:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:45.329 05:07:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
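The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line comes from the suite's waitforlisten helper, called here with the bdevio pid and max_retries=100 per the trace. Stripped of the pid bookkeeping, the polling idea is just:

    # minimal sketch of the waitforlisten loop, not the exact helper;
    # rpc_get_methods is a cheap query any live SPDK target answers
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # assumed delay; the real helper also gives up after max_retries
    done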
00:09:45.329 05:07:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:45.329 05:07:04 -- common/autotest_common.sh@10 -- # set +x 00:09:45.329 [2024-07-26 05:07:04.242381] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:45.329 [2024-07-26 05:07:04.242545] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61447 ] 00:09:45.329 [2024-07-26 05:07:04.424516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:45.588 [2024-07-26 05:07:04.682690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.588 [2024-07-26 05:07:04.682873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.588 [2024-07-26 05:07:04.682907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.964 05:07:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:46.964 05:07:05 -- common/autotest_common.sh@852 -- # return 0 00:09:46.964 05:07:05 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:46.964 I/O targets: 00:09:46.964 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:46.964 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:46.964 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:46.964 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:46.964 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:46.964 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:46.964 00:09:46.964 00:09:46.964 CUnit - A unit testing framework for C - Version 2.1-3 00:09:46.964 http://cunit.sourceforge.net/ 00:09:46.964 00:09:46.964 00:09:46.964 Suite: bdevio tests on: Nvme3n1 00:09:46.964 Test: blockdev write read block ...passed 00:09:46.964 Test: blockdev write zeroes read block ...passed 00:09:46.964 Test: blockdev write zeroes read no split ...passed 00:09:46.964 Test: blockdev write zeroes read split ...passed 00:09:46.964 Test: blockdev write zeroes read split partial ...passed 00:09:46.964 Test: blockdev reset ...[2024-07-26 05:07:05.973603] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:46.964 [2024-07-26 05:07:05.977884] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:46.964 passed 00:09:46.964 Test: blockdev write read 8 blocks ...passed 00:09:46.964 Test: blockdev write read size > 128k ...passed 00:09:46.964 Test: blockdev write read invalid size ...passed 00:09:46.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:46.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:46.964 Test: blockdev write read max offset ...passed 00:09:46.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:46.964 Test: blockdev writev readv 8 blocks ...passed 00:09:46.964 Test: blockdev writev readv 30 x 1block ...passed 00:09:46.964 Test: blockdev writev readv block ...passed 00:09:46.964 Test: blockdev writev readv size > 128k ...passed 00:09:46.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:46.964 Test: blockdev comparev and writev ...[2024-07-26 05:07:05.988044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27620e000 len:0x1000 00:09:46.964 [2024-07-26 05:07:05.988103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:46.964 passed 00:09:46.964 Test: blockdev nvme passthru rw ...passed 00:09:46.964 Test: blockdev nvme passthru vendor specific ...[2024-07-26 05:07:05.988963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:46.964 [2024-07-26 05:07:05.989000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:46.964 passed 00:09:46.964 Test: blockdev nvme admin passthru ...passed 00:09:46.964 Test: blockdev copy ...passed 00:09:46.964 Suite: bdevio tests on: Nvme2n3 00:09:46.964 Test: blockdev write read block ...passed 00:09:46.964 Test: blockdev write zeroes read block ...passed 00:09:46.964 Test: blockdev write zeroes read no split ...passed 00:09:46.964 Test: blockdev write zeroes read split ...passed 00:09:46.964 Test: blockdev write zeroes read split partial ...passed 00:09:46.964 Test: blockdev reset ...[2024-07-26 05:07:06.072349] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:47.224 [2024-07-26 05:07:06.076523] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:47.224 passed 00:09:47.224 Test: blockdev write read 8 blocks ...passed 00:09:47.224 Test: blockdev write read size > 128k ...passed 00:09:47.224 Test: blockdev write read invalid size ...passed 00:09:47.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:47.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:47.224 Test: blockdev write read max offset ...passed 00:09:47.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:47.224 Test: blockdev writev readv 8 blocks ...passed 00:09:47.224 Test: blockdev writev readv 30 x 1block ...passed 00:09:47.224 Test: blockdev writev readv block ...passed 00:09:47.224 Test: blockdev writev readv size > 128k ...passed 00:09:47.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:47.224 Test: blockdev comparev and writev ...[2024-07-26 05:07:06.086285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27620a000 len:0x1000 00:09:47.224 [2024-07-26 05:07:06.086493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:47.224 passed 00:09:47.224 Test: blockdev nvme passthru rw ...passed 00:09:47.224 Test: blockdev nvme passthru vendor specific ...[2024-07-26 05:07:06.087505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:47.224 [2024-07-26 05:07:06.087679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:47.224 passed 00:09:47.224 Test: blockdev nvme admin passthru ...passed 00:09:47.224 Test: blockdev copy ...passed 00:09:47.224 Suite: bdevio tests on: Nvme2n2 00:09:47.224 Test: blockdev write read block ...passed 00:09:47.224 Test: blockdev write zeroes read block ...passed 00:09:47.224 Test: blockdev write zeroes read no split ...passed 00:09:47.224 Test: blockdev write zeroes read split ...passed 00:09:47.224 Test: blockdev write zeroes read split partial ...passed 00:09:47.224 Test: blockdev reset ...[2024-07-26 05:07:06.167945] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:47.224 [2024-07-26 05:07:06.172246] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:47.224 passed 00:09:47.224 Test: blockdev write read 8 blocks ...passed 00:09:47.224 Test: blockdev write read size > 128k ...passed 00:09:47.224 Test: blockdev write read invalid size ...passed 00:09:47.224 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:47.224 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:47.224 Test: blockdev write read max offset ...passed 00:09:47.224 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:47.224 Test: blockdev writev readv 8 blocks ...passed 00:09:47.224 Test: blockdev writev readv 30 x 1block ...passed 00:09:47.224 Test: blockdev writev readv block ...passed 00:09:47.224 Test: blockdev writev readv size > 128k ...passed 00:09:47.224 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:47.224 Test: blockdev comparev and writev ...[2024-07-26 05:07:06.181268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26a006000 len:0x1000 00:09:47.224 [2024-07-26 05:07:06.181318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:47.224 passed 00:09:47.224 Test: blockdev nvme passthru rw ...passed 00:09:47.224 Test: blockdev nvme passthru vendor specific ...[2024-07-26 05:07:06.182130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:47.224 [2024-07-26 05:07:06.182169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:47.224 passed 00:09:47.224 Test: blockdev nvme admin passthru ...passed 00:09:47.224 Test: blockdev copy ...passed 00:09:47.224 Suite: bdevio tests on: Nvme2n1 00:09:47.224 Test: blockdev write read block ...passed 00:09:47.225 Test: blockdev write zeroes read block ...passed 00:09:47.225 Test: blockdev write zeroes read no split ...passed 00:09:47.225 Test: blockdev write zeroes read split ...passed 00:09:47.225 Test: blockdev write zeroes read split partial ...passed 00:09:47.225 Test: blockdev reset ...[2024-07-26 05:07:06.263086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:47.225 [2024-07-26 05:07:06.267617] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:47.225 passed 00:09:47.225 Test: blockdev write read 8 blocks ...passed 00:09:47.225 Test: blockdev write read size > 128k ...passed 00:09:47.225 Test: blockdev write read invalid size ...passed 00:09:47.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:47.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:47.225 Test: blockdev write read max offset ...passed 00:09:47.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:47.225 Test: blockdev writev readv 8 blocks ...passed 00:09:47.225 Test: blockdev writev readv 30 x 1block ...passed 00:09:47.225 Test: blockdev writev readv block ...passed 00:09:47.225 Test: blockdev writev readv size > 128k ...passed 00:09:47.225 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:47.225 Test: blockdev comparev and writev ...[2024-07-26 05:07:06.277395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26a001000 len:0x1000 00:09:47.225 [2024-07-26 05:07:06.277622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:47.225 passed 00:09:47.225 Test: blockdev nvme passthru rw ...passed 00:09:47.225 Test: blockdev nvme passthru vendor specific ...[2024-07-26 05:07:06.278791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:47.225 passed 00:09:47.225 Test: blockdev nvme admin passthru ...[2024-07-26 05:07:06.278960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:47.225 passed 00:09:47.225 Test: blockdev copy ...passed 00:09:47.225 Suite: bdevio tests on: Nvme1n1 00:09:47.225 Test: blockdev write read block ...passed 00:09:47.225 Test: blockdev write zeroes read block ...passed 00:09:47.225 Test: blockdev write zeroes read no split ...passed 00:09:47.225 Test: blockdev write zeroes read split ...passed 00:09:47.484 Test: blockdev write zeroes read split partial ...passed 00:09:47.484 Test: blockdev reset ...[2024-07-26 05:07:06.359528] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:47.484 [2024-07-26 05:07:06.363654] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:47.484 passed 00:09:47.484 Test: blockdev write read 8 blocks ...passed 00:09:47.484 Test: blockdev write read size > 128k ...passed 00:09:47.484 Test: blockdev write read invalid size ...passed 00:09:47.484 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:47.484 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:47.484 Test: blockdev write read max offset ...passed 00:09:47.484 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:47.484 Test: blockdev writev readv 8 blocks ...passed 00:09:47.484 Test: blockdev writev readv 30 x 1block ...passed 00:09:47.484 Test: blockdev writev readv block ...passed 00:09:47.484 Test: blockdev writev readv size > 128k ...passed 00:09:47.484 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:47.484 Test: blockdev comparev and writev ...[2024-07-26 05:07:06.372611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x279c06000 len:0x1000 00:09:47.484 [2024-07-26 05:07:06.372662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:47.484 passed 00:09:47.484 Test: blockdev nvme passthru rw ...passed 00:09:47.484 Test: blockdev nvme passthru vendor specific ...passed 00:09:47.484 Test: blockdev nvme admin passthru ...[2024-07-26 05:07:06.373545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:47.484 [2024-07-26 05:07:06.373601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:47.484 passed 00:09:47.484 Test: blockdev copy ...passed 00:09:47.484 Suite: bdevio tests on: Nvme0n1 00:09:47.484 Test: blockdev write read block ...passed 00:09:47.484 Test: blockdev write zeroes read block ...passed 00:09:47.484 Test: blockdev write zeroes read no split ...passed 00:09:47.484 Test: blockdev write zeroes read split ...passed 00:09:47.484 Test: blockdev write zeroes read split partial ...passed 00:09:47.484 Test: blockdev reset ...[2024-07-26 05:07:06.491505] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:47.484 [2024-07-26 05:07:06.495494] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:47.484 passed 00:09:47.484 Test: blockdev write read 8 blocks ...passed 00:09:47.484 Test: blockdev write read size > 128k ...passed 00:09:47.484 Test: blockdev write read invalid size ...passed 00:09:47.484 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:47.484 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:47.484 Test: blockdev write read max offset ...passed 00:09:47.484 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:47.484 Test: blockdev writev readv 8 blocks ...passed 00:09:47.484 Test: blockdev writev readv 30 x 1block ...passed 00:09:47.484 Test: blockdev writev readv block ...passed 00:09:47.484 Test: blockdev writev readv size > 128k ...passed 00:09:47.484 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:47.484 Test: blockdev comparev and writev ...passed[2024-07-26 05:07:06.506063] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:47.484 separate metadata which is not supported yet. 
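The skip immediately above is expected rather than a failure: Nvme0n1 is the only namespace in this run formatted with separate metadata ("md_size": 64, "md_interleave": false in the bdev_get_bdevs dump earlier), and bdevio's comparev_and_writev case does not support that layout yet, so the test still reports passed after skipping. The COMPARE FAILURE (02/85) notices in the other suites likewise appear inside passing tests and correspond to the miscompare path the comparev case intentionally exercises. To inspect a bdev's metadata layout against a running target (sketch):

    $ scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'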
00:09:47.484 00:09:47.484 Test: blockdev nvme passthru rw ...passed 00:09:47.484 Test: blockdev nvme passthru vendor specific ...[2024-07-26 05:07:06.507046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:47.484 [2024-07-26 05:07:06.507275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:47.484 passed 00:09:47.484 Test: blockdev nvme admin passthru ...passed 00:09:47.484 Test: blockdev copy ...passed 00:09:47.484 00:09:47.484 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.484 suites 6 6 n/a 0 0 00:09:47.484 tests 138 138 138 0 0 00:09:47.484 asserts 893 893 893 0 n/a 00:09:47.484 00:09:47.484 Elapsed time = 1.701 seconds 00:09:47.484 0 00:09:47.484 05:07:06 -- bdev/blockdev.sh@293 -- # killprocess 61447 00:09:47.484 05:07:06 -- common/autotest_common.sh@926 -- # '[' -z 61447 ']' 00:09:47.484 05:07:06 -- common/autotest_common.sh@930 -- # kill -0 61447 00:09:47.484 05:07:06 -- common/autotest_common.sh@931 -- # uname 00:09:47.484 05:07:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:47.484 05:07:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61447 00:09:47.484 05:07:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:47.484 killing process with pid 61447 00:09:47.484 05:07:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:47.484 05:07:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61447' 00:09:47.484 05:07:06 -- common/autotest_common.sh@945 -- # kill 61447 00:09:47.484 05:07:06 -- common/autotest_common.sh@950 -- # wait 61447 00:09:48.861 05:07:07 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:48.861 00:09:48.861 real 0m3.607s 00:09:48.861 user 0m9.215s 00:09:48.861 sys 0m0.448s 00:09:48.861 ************************************ 00:09:48.861 END TEST bdev_bounds 00:09:48.861 ************************************ 00:09:48.861 05:07:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.861 05:07:07 -- common/autotest_common.sh@10 -- # set +x 00:09:48.861 05:07:07 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:48.861 05:07:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:48.861 05:07:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:48.861 05:07:07 -- common/autotest_common.sh@10 -- # set +x 00:09:48.861 ************************************ 00:09:48.861 START TEST bdev_nbd 00:09:48.861 ************************************ 00:09:48.862 05:07:07 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:48.862 05:07:07 -- bdev/blockdev.sh@298 -- # uname -s 00:09:48.862 05:07:07 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:48.862 05:07:07 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.862 05:07:07 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:48.862 05:07:07 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:48.862 05:07:07 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:48.862 05:07:07 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:09:48.862 05:07:07 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 
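bdev_nbd only makes sense when the kernel's nbd driver is present, hence the [[ -e /sys/module/nbd ]] probe just above. On a machine where the module is not already loaded, a pre-flight along these lines would be needed first (an assumption; this CI image ships with the module present, so the run never exercises it):

    [[ -e /sys/module/nbd ]] || sudo modprobe nbd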
00:09:48.862 05:07:07 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:48.862 05:07:07 -- bdev/blockdev.sh@309 -- # local nbd_all 00:09:48.862 05:07:07 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:09:48.862 05:07:07 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:48.862 05:07:07 -- bdev/blockdev.sh@312 -- # local nbd_list 00:09:48.862 05:07:07 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:48.862 05:07:07 -- bdev/blockdev.sh@313 -- # local bdev_list 00:09:48.862 05:07:07 -- bdev/blockdev.sh@316 -- # nbd_pid=61525 00:09:48.862 05:07:07 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:48.862 05:07:07 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:48.862 05:07:07 -- bdev/blockdev.sh@318 -- # waitforlisten 61525 /var/tmp/spdk-nbd.sock 00:09:48.862 05:07:07 -- common/autotest_common.sh@819 -- # '[' -z 61525 ']' 00:09:48.862 05:07:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:48.862 05:07:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:48.862 05:07:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:48.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:48.862 05:07:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:48.862 05:07:07 -- common/autotest_common.sh@10 -- # set +x 00:09:48.862 [2024-07-26 05:07:07.882018] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
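Note that the nbd test hosts its bdevs in a minimal bdev_svc app bound to a dedicated RPC socket (-r /var/tmp/spdk-nbd.sock), so its nbd_* RPCs cannot collide with a target on the default /var/tmp/spdk.sock. Condensed from the trace (paths as on this VM):

    $ test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
          --json test/bdev/bdev.json &
    $ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    /dev/nbd0

nbd_start_disk prints the kernel device it bound, which is the source of the bare "/dev/nbdN" lines scattered through the output below.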
00:09:48.862 [2024-07-26 05:07:07.882563] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.119 [2024-07-26 05:07:08.066071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.378 [2024-07-26 05:07:08.331991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.756 05:07:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:50.756 05:07:09 -- common/autotest_common.sh@852 -- # return 0 00:09:50.756 05:07:09 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@24 -- # local i 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:50.756 05:07:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:50.756 05:07:09 -- common/autotest_common.sh@857 -- # local i 00:09:50.756 05:07:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:50.756 05:07:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:50.756 05:07:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:50.756 05:07:09 -- common/autotest_common.sh@861 -- # break 00:09:50.756 05:07:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:50.756 05:07:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:50.756 05:07:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:50.756 1+0 records in 00:09:50.756 1+0 records out 00:09:50.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379698 s, 10.8 MB/s 00:09:50.756 05:07:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.756 05:07:09 -- common/autotest_common.sh@874 -- # size=4096 00:09:50.756 05:07:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.756 05:07:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:50.756 05:07:09 -- common/autotest_common.sh@877 -- # return 0 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:50.756 05:07:09 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:50.756 05:07:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:51.014 05:07:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:51.014 05:07:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:51.014 05:07:09 -- common/autotest_common.sh@857 -- # local i 00:09:51.014 05:07:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:51.014 05:07:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:51.014 05:07:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:51.014 05:07:09 -- common/autotest_common.sh@861 -- # break 00:09:51.014 05:07:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:51.015 05:07:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:51.015 05:07:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:51.015 1+0 records in 00:09:51.015 1+0 records out 00:09:51.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041845 s, 9.8 MB/s 00:09:51.015 05:07:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.015 05:07:09 -- common/autotest_common.sh@874 -- # size=4096 00:09:51.015 05:07:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.015 05:07:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:51.015 05:07:09 -- common/autotest_common.sh@877 -- # return 0 00:09:51.015 05:07:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:51.015 05:07:09 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:51.015 05:07:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:51.015 05:07:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:51.015 05:07:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:51.015 05:07:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:51.015 05:07:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:09:51.015 05:07:10 -- common/autotest_common.sh@857 -- # local i 00:09:51.015 05:07:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:51.015 05:07:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:51.015 05:07:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:09:51.015 05:07:10 -- common/autotest_common.sh@861 -- # break 00:09:51.015 05:07:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:51.015 05:07:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:51.015 05:07:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:51.015 1+0 records in 00:09:51.015 1+0 records out 00:09:51.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653969 s, 6.3 MB/s 00:09:51.015 05:07:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.015 05:07:10 -- common/autotest_common.sh@874 -- # size=4096 00:09:51.015 05:07:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.015 05:07:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:51.015 05:07:10 -- common/autotest_common.sh@877 -- # return 0 
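Each attach above is followed by the waitfornbd probe visible in the trace: wait for the name to appear in /proc/partitions, then prove the device services a direct-I/O read. Condensed into a standalone helper (the retry sleep and the /tmp scratch path are assumptions; the logged run matches on the first grep and keeps its scratch file under the repo):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off, never exercised in this log
        done
        # one O_DIRECT 4 KiB read; a non-empty copy proves the device is live
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]
    }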
00:09:51.015 05:07:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:51.015 05:07:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:51.015 05:07:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:51.274 05:07:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:51.274 05:07:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:51.274 05:07:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:51.274 05:07:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:09:51.274 05:07:10 -- common/autotest_common.sh@857 -- # local i 00:09:51.274 05:07:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:51.274 05:07:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:51.274 05:07:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:09:51.274 05:07:10 -- common/autotest_common.sh@861 -- # break 00:09:51.274 05:07:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:51.274 05:07:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:51.274 05:07:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:51.274 1+0 records in 00:09:51.274 1+0 records out 00:09:51.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550545 s, 7.4 MB/s 00:09:51.274 05:07:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.274 05:07:10 -- common/autotest_common.sh@874 -- # size=4096 00:09:51.274 05:07:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.274 05:07:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:51.274 05:07:10 -- common/autotest_common.sh@877 -- # return 0 00:09:51.274 05:07:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:51.274 05:07:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:51.274 05:07:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:51.532 05:07:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:51.532 05:07:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:51.532 05:07:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:51.532 05:07:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:09:51.532 05:07:10 -- common/autotest_common.sh@857 -- # local i 00:09:51.532 05:07:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:51.532 05:07:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:51.532 05:07:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:09:51.532 05:07:10 -- common/autotest_common.sh@861 -- # break 00:09:51.532 05:07:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:51.532 05:07:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:51.532 05:07:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:51.532 1+0 records in 00:09:51.532 1+0 records out 00:09:51.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000953552 s, 4.3 MB/s 00:09:51.532 05:07:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.532 05:07:10 -- common/autotest_common.sh@874 -- # size=4096 00:09:51.533 05:07:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.533 05:07:10 -- common/autotest_common.sh@876 -- # '[' 
4096 '!=' 0 ']' 00:09:51.533 05:07:10 -- common/autotest_common.sh@877 -- # return 0 00:09:51.533 05:07:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:51.533 05:07:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:51.533 05:07:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:51.792 05:07:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:51.792 05:07:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:51.792 05:07:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:51.792 05:07:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:09:51.792 05:07:10 -- common/autotest_common.sh@857 -- # local i 00:09:51.792 05:07:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:51.792 05:07:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:51.792 05:07:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:09:51.792 05:07:10 -- common/autotest_common.sh@861 -- # break 00:09:51.792 05:07:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:51.792 05:07:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:51.792 05:07:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:51.792 1+0 records in 00:09:51.792 1+0 records out 00:09:51.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000818325 s, 5.0 MB/s 00:09:51.792 05:07:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.792 05:07:10 -- common/autotest_common.sh@874 -- # size=4096 00:09:51.792 05:07:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.792 05:07:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:51.792 05:07:10 -- common/autotest_common.sh@877 -- # return 0 00:09:51.792 05:07:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:51.792 05:07:10 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:51.792 05:07:10 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:52.051 05:07:11 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:52.051 { 00:09:52.051 "nbd_device": "/dev/nbd0", 00:09:52.051 "bdev_name": "Nvme0n1" 00:09:52.051 }, 00:09:52.051 { 00:09:52.051 "nbd_device": "/dev/nbd1", 00:09:52.051 "bdev_name": "Nvme1n1" 00:09:52.051 }, 00:09:52.051 { 00:09:52.051 "nbd_device": "/dev/nbd2", 00:09:52.051 "bdev_name": "Nvme2n1" 00:09:52.051 }, 00:09:52.051 { 00:09:52.051 "nbd_device": "/dev/nbd3", 00:09:52.051 "bdev_name": "Nvme2n2" 00:09:52.051 }, 00:09:52.051 { 00:09:52.051 "nbd_device": "/dev/nbd4", 00:09:52.051 "bdev_name": "Nvme2n3" 00:09:52.051 }, 00:09:52.051 { 00:09:52.051 "nbd_device": "/dev/nbd5", 00:09:52.051 "bdev_name": "Nvme3n1" 00:09:52.051 } 00:09:52.051 ]' 00:09:52.051 05:07:11 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:52.051 05:07:11 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:52.051 { 00:09:52.051 "nbd_device": "/dev/nbd0", 00:09:52.051 "bdev_name": "Nvme0n1" 00:09:52.051 }, 00:09:52.051 { 00:09:52.051 "nbd_device": "/dev/nbd1", 00:09:52.051 "bdev_name": "Nvme1n1" 00:09:52.051 }, 00:09:52.051 { 00:09:52.051 "nbd_device": "/dev/nbd2", 00:09:52.051 "bdev_name": "Nvme2n1" 00:09:52.051 }, 00:09:52.051 { 00:09:52.051 "nbd_device": "/dev/nbd3", 00:09:52.051 "bdev_name": "Nvme2n2" 00:09:52.051 }, 00:09:52.051 { 00:09:52.051 "nbd_device": 
"/dev/nbd4", 00:09:52.051 "bdev_name": "Nvme2n3" 00:09:52.051 }, 00:09:52.051 { 00:09:52.051 "nbd_device": "/dev/nbd5", 00:09:52.051 "bdev_name": "Nvme3n1" 00:09:52.051 } 00:09:52.051 ]' 00:09:52.051 05:07:11 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:52.051 05:07:11 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:52.051 05:07:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.051 05:07:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:52.051 05:07:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:52.051 05:07:11 -- bdev/nbd_common.sh@51 -- # local i 00:09:52.051 05:07:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.051 05:07:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:52.310 05:07:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:52.310 05:07:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:52.310 05:07:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:52.310 05:07:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.310 05:07:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.310 05:07:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@41 -- # break 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@41 -- # break 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.568 05:07:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:52.828 05:07:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:52.828 05:07:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:52.828 05:07:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:52.828 05:07:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.828 05:07:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.828 05:07:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:52.828 05:07:11 -- bdev/nbd_common.sh@41 -- # break 00:09:52.828 05:07:11 -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.828 05:07:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.828 05:07:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:53.089 05:07:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:53.090 05:07:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:53.090 05:07:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:53.090 
05:07:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.090 05:07:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.090 05:07:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:53.090 05:07:12 -- bdev/nbd_common.sh@41 -- # break 00:09:53.090 05:07:12 -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.090 05:07:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.090 05:07:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@41 -- # break 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@41 -- # break 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.348 05:07:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@65 -- # true 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@65 -- # count=0 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@122 -- # count=0 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@127 -- # return 0 00:09:53.607 05:07:12 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@12 -- # local i 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:53.607 05:07:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:53.866 /dev/nbd0 00:09:54.125 05:07:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:54.125 05:07:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:54.125 05:07:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:54.125 05:07:12 -- common/autotest_common.sh@857 -- # local i 00:09:54.125 05:07:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:54.125 05:07:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:54.125 05:07:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:54.125 05:07:12 -- common/autotest_common.sh@861 -- # break 00:09:54.125 05:07:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:54.125 05:07:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:54.125 05:07:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:54.125 1+0 records in 00:09:54.125 1+0 records out 00:09:54.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643726 s, 6.4 MB/s 00:09:54.125 05:07:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.125 05:07:12 -- common/autotest_common.sh@874 -- # size=4096 00:09:54.125 05:07:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.125 05:07:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:54.125 05:07:13 -- common/autotest_common.sh@877 -- # return 0 00:09:54.125 05:07:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:54.125 05:07:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:54.125 05:07:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:54.125 /dev/nbd1 00:09:54.384 05:07:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:54.384 05:07:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:54.384 05:07:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:54.384 05:07:13 -- common/autotest_common.sh@857 -- # local i 00:09:54.384 05:07:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:54.384 05:07:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:54.384 05:07:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:54.384 05:07:13 -- common/autotest_common.sh@861 -- # break 
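One step back in the trace, after the stop pass, nbd_get_count confirmed a clean slate before this second, explicit-device attach pass began: nbd_get_disks returned [] and the jq/grep pipeline counted zero. The same check as a one-liner (sketch; note grep -c exits non-zero on a zero count, which is why the helper routes an empty echo through it):

    $ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
          | jq -r '.[] | .nbd_device' | grep -c /dev/nbd
    0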
00:09:54.384 05:07:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:54.384 05:07:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:54.384 05:07:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:54.384 1+0 records in 00:09:54.384 1+0 records out 00:09:54.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046583 s, 8.8 MB/s 00:09:54.384 05:07:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.384 05:07:13 -- common/autotest_common.sh@874 -- # size=4096 00:09:54.384 05:07:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.384 05:07:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:54.384 05:07:13 -- common/autotest_common.sh@877 -- # return 0 00:09:54.384 05:07:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:54.384 05:07:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:54.384 05:07:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:54.384 /dev/nbd10 00:09:54.384 05:07:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:54.384 05:07:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:54.384 05:07:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:09:54.384 05:07:13 -- common/autotest_common.sh@857 -- # local i 00:09:54.384 05:07:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:54.384 05:07:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:54.384 05:07:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:09:54.384 05:07:13 -- common/autotest_common.sh@861 -- # break 00:09:54.384 05:07:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:54.384 05:07:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:54.384 05:07:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:54.384 1+0 records in 00:09:54.384 1+0 records out 00:09:54.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352477 s, 11.6 MB/s 00:09:54.384 05:07:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.384 05:07:13 -- common/autotest_common.sh@874 -- # size=4096 00:09:54.384 05:07:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.384 05:07:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:54.384 05:07:13 -- common/autotest_common.sh@877 -- # return 0 00:09:54.384 05:07:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:54.384 05:07:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:54.384 05:07:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:54.642 /dev/nbd11 00:09:54.642 05:07:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:54.642 05:07:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:54.642 05:07:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:09:54.642 05:07:13 -- common/autotest_common.sh@857 -- # local i 00:09:54.642 05:07:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:54.642 05:07:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:54.642 05:07:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:09:54.642 05:07:13 -- 
common/autotest_common.sh@861 -- # break 00:09:54.642 05:07:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:54.642 05:07:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:54.642 05:07:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:54.642 1+0 records in 00:09:54.642 1+0 records out 00:09:54.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613249 s, 6.7 MB/s 00:09:54.642 05:07:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.642 05:07:13 -- common/autotest_common.sh@874 -- # size=4096 00:09:54.642 05:07:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.642 05:07:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:54.642 05:07:13 -- common/autotest_common.sh@877 -- # return 0 00:09:54.642 05:07:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:54.642 05:07:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:54.642 05:07:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:54.900 /dev/nbd12 00:09:54.900 05:07:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:54.900 05:07:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:54.900 05:07:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:09:54.900 05:07:13 -- common/autotest_common.sh@857 -- # local i 00:09:54.900 05:07:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:54.900 05:07:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:54.900 05:07:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:09:54.900 05:07:13 -- common/autotest_common.sh@861 -- # break 00:09:54.900 05:07:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:54.900 05:07:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:54.900 05:07:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:54.901 1+0 records in 00:09:54.901 1+0 records out 00:09:54.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000816797 s, 5.0 MB/s 00:09:54.901 05:07:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.901 05:07:13 -- common/autotest_common.sh@874 -- # size=4096 00:09:54.901 05:07:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:54.901 05:07:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:54.901 05:07:13 -- common/autotest_common.sh@877 -- # return 0 00:09:54.901 05:07:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:54.901 05:07:13 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:54.901 05:07:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:55.159 /dev/nbd13 00:09:55.159 05:07:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:55.159 05:07:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:55.159 05:07:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:09:55.159 05:07:14 -- common/autotest_common.sh@857 -- # local i 00:09:55.159 05:07:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:55.159 05:07:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:55.159 05:07:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 
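The nbd_common.sh@14/@15 lines repeat that export-and-probe step for every bdev/nbd pair. A sketch of the loop being traced (nbd_start_disks), assuming a waitfornbd helper like the one above:

  bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ((i = 0; i < ${#bdev_list[@]}; i++)); do
    # ask the nbd app on spdk-nbd.sock to back this /dev/nbdX with the bdev
    "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    waitfornbd "$(basename "${nbd_list[i]}")"
  done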
00:09:55.159 05:07:14 -- common/autotest_common.sh@861 -- # break 00:09:55.159 05:07:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:55.159 05:07:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:55.159 05:07:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.159 1+0 records in 00:09:55.159 1+0 records out 00:09:55.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643109 s, 6.4 MB/s 00:09:55.159 05:07:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.159 05:07:14 -- common/autotest_common.sh@874 -- # size=4096 00:09:55.159 05:07:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.159 05:07:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:55.159 05:07:14 -- common/autotest_common.sh@877 -- # return 0 00:09:55.159 05:07:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:55.159 05:07:14 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:55.159 05:07:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:55.159 05:07:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.159 05:07:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:55.418 05:07:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:55.418 { 00:09:55.418 "nbd_device": "/dev/nbd0", 00:09:55.418 "bdev_name": "Nvme0n1" 00:09:55.418 }, 00:09:55.418 { 00:09:55.418 "nbd_device": "/dev/nbd1", 00:09:55.418 "bdev_name": "Nvme1n1" 00:09:55.418 }, 00:09:55.418 { 00:09:55.418 "nbd_device": "/dev/nbd10", 00:09:55.418 "bdev_name": "Nvme2n1" 00:09:55.418 }, 00:09:55.418 { 00:09:55.418 "nbd_device": "/dev/nbd11", 00:09:55.418 "bdev_name": "Nvme2n2" 00:09:55.418 }, 00:09:55.418 { 00:09:55.418 "nbd_device": "/dev/nbd12", 00:09:55.418 "bdev_name": "Nvme2n3" 00:09:55.418 }, 00:09:55.418 { 00:09:55.418 "nbd_device": "/dev/nbd13", 00:09:55.418 "bdev_name": "Nvme3n1" 00:09:55.418 } 00:09:55.418 ]' 00:09:55.418 05:07:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:55.418 { 00:09:55.418 "nbd_device": "/dev/nbd0", 00:09:55.418 "bdev_name": "Nvme0n1" 00:09:55.418 }, 00:09:55.418 { 00:09:55.418 "nbd_device": "/dev/nbd1", 00:09:55.418 "bdev_name": "Nvme1n1" 00:09:55.418 }, 00:09:55.418 { 00:09:55.418 "nbd_device": "/dev/nbd10", 00:09:55.418 "bdev_name": "Nvme2n1" 00:09:55.418 }, 00:09:55.418 { 00:09:55.418 "nbd_device": "/dev/nbd11", 00:09:55.418 "bdev_name": "Nvme2n2" 00:09:55.418 }, 00:09:55.418 { 00:09:55.418 "nbd_device": "/dev/nbd12", 00:09:55.418 "bdev_name": "Nvme2n3" 00:09:55.418 }, 00:09:55.418 { 00:09:55.418 "nbd_device": "/dev/nbd13", 00:09:55.418 "bdev_name": "Nvme3n1" 00:09:55.418 } 00:09:55.418 ]' 00:09:55.418 05:07:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:55.418 05:07:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:55.418 /dev/nbd1 00:09:55.418 /dev/nbd10 00:09:55.418 /dev/nbd11 00:09:55.418 /dev/nbd12 00:09:55.418 /dev/nbd13' 00:09:55.418 05:07:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:55.418 /dev/nbd1 00:09:55.418 /dev/nbd10 00:09:55.418 /dev/nbd11 00:09:55.418 /dev/nbd12 00:09:55.418 /dev/nbd13' 00:09:55.418 05:07:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@65 -- # count=6 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@66 -- # echo 6 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@95 -- # 
count=6 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:55.677 256+0 records in 00:09:55.677 256+0 records out 00:09:55.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00614094 s, 171 MB/s 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:55.677 256+0 records in 00:09:55.677 256+0 records out 00:09:55.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125617 s, 8.3 MB/s 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:55.677 05:07:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:55.936 256+0 records in 00:09:55.936 256+0 records out 00:09:55.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129062 s, 8.1 MB/s 00:09:55.936 05:07:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:55.936 05:07:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:55.936 256+0 records in 00:09:55.936 256+0 records out 00:09:55.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129427 s, 8.1 MB/s 00:09:55.936 05:07:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:55.936 05:07:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:56.194 256+0 records in 00:09:56.194 256+0 records out 00:09:56.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130667 s, 8.0 MB/s 00:09:56.194 05:07:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:56.194 05:07:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:56.194 256+0 records in 00:09:56.194 256+0 records out 00:09:56.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131113 s, 8.0 MB/s 00:09:56.194 05:07:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:56.194 05:07:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:56.453 256+0 records in 00:09:56.453 256+0 records out 00:09:56.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131712 s, 8.0 MB/s 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:56.453 05:07:15 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@51 -- # local i 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.453 05:07:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:56.712 05:07:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:56.712 05:07:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:56.712 05:07:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:56.712 05:07:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.712 05:07:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.712 05:07:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:56.712 05:07:15 -- bdev/nbd_common.sh@41 -- # break 00:09:56.712 05:07:15 -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.712 05:07:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.712 05:07:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:56.972 05:07:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:56.972 05:07:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:56.972 05:07:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:56.972 05:07:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.972 05:07:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.972 05:07:15 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:56.972 05:07:15 -- bdev/nbd_common.sh@41 -- # break 00:09:56.972 05:07:15 -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.972 05:07:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.972 05:07:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:57.231 05:07:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:57.231 05:07:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:57.231 05:07:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:57.231 05:07:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.231 05:07:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.231 05:07:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:57.231 05:07:16 -- bdev/nbd_common.sh@41 -- # break 00:09:57.231 05:07:16 -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.231 05:07:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.231 05:07:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:57.490 05:07:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:57.490 05:07:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:57.490 05:07:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:57.490 05:07:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.490 05:07:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.490 05:07:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:57.490 05:07:16 -- bdev/nbd_common.sh@41 -- # break 00:09:57.490 05:07:16 -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.490 05:07:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.490 05:07:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@41 -- # break 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@41 -- # break 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@45 -- # return 0 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.749 05:07:16 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:58.008 05:07:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:58.008 05:07:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:58.008 05:07:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@65 -- # true 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@65 -- # count=0 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@104 -- # count=0 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@109 -- # return 0 00:09:58.267 05:07:17 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:58.267 05:07:17 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:58.526 malloc_lvol_verify 00:09:58.526 05:07:17 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:58.785 13efdcf4-960f-4f61-908a-66d5ce684e7b 00:09:58.785 05:07:17 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:58.785 4de89796-baf1-454b-aa9b-8ec38e41907f 00:09:58.785 05:07:17 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:59.044 /dev/nbd0 00:09:59.044 05:07:18 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:59.044 mke2fs 1.46.5 (30-Dec-2021) 00:09:59.044 Discarding device blocks: 0/4096 done 00:09:59.044 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:59.044 00:09:59.044 Allocating group tables: 0/1 done 00:09:59.044 Writing inode tables: 0/1 done 00:09:59.044 Creating journal (1024 blocks): done 00:09:59.044 Writing superblocks and filesystem accounting information: 0/1 done 00:09:59.044 00:09:59.044 05:07:18 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:59.044 05:07:18 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:59.044 05:07:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.044 05:07:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:59.044 05:07:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:59.044 05:07:18 -- bdev/nbd_common.sh@51 -- # local i 00:09:59.044 05:07:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:59.044 05:07:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:59.302 05:07:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:59.302 05:07:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:59.302 05:07:18 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:09:59.302 05:07:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.302 05:07:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.302 05:07:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:59.302 05:07:18 -- bdev/nbd_common.sh@41 -- # break 00:09:59.302 05:07:18 -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.302 05:07:18 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:59.302 05:07:18 -- bdev/nbd_common.sh@147 -- # return 0 00:09:59.303 05:07:18 -- bdev/blockdev.sh@324 -- # killprocess 61525 00:09:59.303 05:07:18 -- common/autotest_common.sh@926 -- # '[' -z 61525 ']' 00:09:59.303 05:07:18 -- common/autotest_common.sh@930 -- # kill -0 61525 00:09:59.303 05:07:18 -- common/autotest_common.sh@931 -- # uname 00:09:59.303 05:07:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:59.303 05:07:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61525 00:09:59.303 killing process with pid 61525 00:09:59.303 05:07:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:59.303 05:07:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:59.303 05:07:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61525' 00:09:59.303 05:07:18 -- common/autotest_common.sh@945 -- # kill 61525 00:09:59.303 05:07:18 -- common/autotest_common.sh@950 -- # wait 61525 00:10:00.682 05:07:19 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:10:00.682 00:10:00.682 real 0m11.800s 00:10:00.682 user 0m15.515s 00:10:00.682 sys 0m4.332s 00:10:00.682 05:07:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.682 05:07:19 -- common/autotest_common.sh@10 -- # set +x 00:10:00.682 ************************************ 00:10:00.682 END TEST bdev_nbd 00:10:00.682 ************************************ 00:10:00.682 05:07:19 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:10:00.682 skipping fio tests on NVMe due to multi-ns failures. 00:10:00.682 05:07:19 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:10:00.682 05:07:19 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:00.682 05:07:19 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:00.682 05:07:19 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:00.682 05:07:19 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:00.682 05:07:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:00.682 05:07:19 -- common/autotest_common.sh@10 -- # set +x 00:10:00.682 ************************************ 00:10:00.682 START TEST bdev_verify 00:10:00.682 ************************************ 00:10:00.682 05:07:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:00.682 [2024-07-26 05:07:19.762192] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
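The bdevperf instance starting here is the heart of bdev_verify: it loads every bdev from bdev.json and runs a write/read-back/compare workload against each one. The invocation, reproduced from the trace, with glosses for the flags we are confident about:

  # -q 128: 128 outstanding I/Os per job
  # -o 4096: I/O size in bytes (4 KiB)
  # -w verify: write a pattern, read it back, compare
  # -t 5: run time in seconds
  # -m 0x3: core mask - reactors on cores 0 and 1, matching the two
  #         'Reactor started' lines below
  # -C and the trailing '' are passed through exactly as traced
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''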
00:10:00.682 [2024-07-26 05:07:19.762373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61922 ] 00:10:00.941 [2024-07-26 05:07:19.945978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:01.200 [2024-07-26 05:07:20.182573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.200 [2024-07-26 05:07:20.182606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.138 Running I/O for 5 seconds... 00:10:07.408 00:10:07.408 Latency(us) 00:10:07.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:07.408 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:07.408 Verification LBA range: start 0x0 length 0xbd0bd 00:10:07.408 Nvme0n1 : 5.04 3035.80 11.86 0.00 0.00 42059.51 5430.13 50431.51 00:10:07.408 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:07.408 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:07.408 Nvme0n1 : 5.04 3034.47 11.85 0.00 0.00 42083.14 4431.48 53926.77 00:10:07.408 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:07.408 Verification LBA range: start 0x0 length 0xa0000 00:10:07.409 Nvme1n1 : 5.04 3034.98 11.86 0.00 0.00 42034.09 5898.24 48184.56 00:10:07.409 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:07.409 Verification LBA range: start 0xa0000 length 0xa0000 00:10:07.409 Nvme1n1 : 5.05 3033.10 11.85 0.00 0.00 42059.63 5710.99 51679.82 00:10:07.409 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:07.409 Verification LBA range: start 0x0 length 0x80000 00:10:07.409 Nvme2n1 : 5.04 3034.25 11.85 0.00 0.00 41987.54 6210.32 42941.68 00:10:07.409 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:07.409 Verification LBA range: start 0x80000 length 0x80000 00:10:07.409 Nvme2n1 : 5.05 3037.87 11.87 0.00 0.00 41884.25 2761.87 38947.11 00:10:07.409 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:07.409 Verification LBA range: start 0x0 length 0x80000 00:10:07.409 Nvme2n2 : 5.05 3038.91 11.87 0.00 0.00 41885.91 2246.95 38947.11 00:10:07.409 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:07.409 Verification LBA range: start 0x80000 length 0x80000 00:10:07.409 Nvme2n2 : 5.05 3036.46 11.86 0.00 0.00 41847.23 4244.24 35701.52 00:10:07.409 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:07.409 Verification LBA range: start 0x0 length 0x80000 00:10:07.409 Nvme2n3 : 5.05 3038.14 11.87 0.00 0.00 41853.24 2777.48 37199.48 00:10:07.409 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:07.409 Verification LBA range: start 0x80000 length 0x80000 00:10:07.409 Nvme2n3 : 5.05 3035.31 11.86 0.00 0.00 41811.50 5367.71 34203.55 00:10:07.409 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:07.409 Verification LBA range: start 0x0 length 0x20000 00:10:07.409 Nvme3n1 : 5.05 3036.93 11.86 0.00 0.00 41819.51 3932.16 34952.53 00:10:07.409 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:07.409 Verification LBA range: start 0x20000 length 0x20000 00:10:07.409 Nvme3n1 : 5.06 3034.67 11.85 0.00 0.00 41792.98 5648.58 34702.87 00:10:07.409 
=================================================================================================================== 00:10:07.409 Total : 36430.90 142.31 0.00 0.00 41926.42 2246.95 53926.77 00:10:17.384 00:10:17.384 real 0m16.707s 00:10:17.384 user 0m31.749s 00:10:17.384 sys 0m0.458s 00:10:17.384 05:07:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.384 05:07:36 -- common/autotest_common.sh@10 -- # set +x 00:10:17.384 ************************************ 00:10:17.384 END TEST bdev_verify 00:10:17.384 ************************************ 00:10:17.384 05:07:36 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:17.384 05:07:36 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:17.384 05:07:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:17.384 05:07:36 -- common/autotest_common.sh@10 -- # set +x 00:10:17.384 ************************************ 00:10:17.384 START TEST bdev_verify_big_io 00:10:17.384 ************************************ 00:10:17.384 05:07:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:17.643 [2024-07-26 05:07:36.526171] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:17.643 [2024-07-26 05:07:36.526404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62032 ] 00:10:17.643 [2024-07-26 05:07:36.709751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:17.902 [2024-07-26 05:07:36.945607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.902 [2024-07-26 05:07:36.945635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.838 Running I/O for 5 seconds... 
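An aside on the banners: every START/END pair in this log, including the bdev_verify_big_io pair above, is printed by the run_test wrapper, which also accounts for the real/user/sys timing lines. A rough reconstruction of its shape (the actual helper in autotest_common.sh does more bookkeeping):

  run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"   # emits the real/user/sys lines seen in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
  }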
00:10:25.404 00:10:25.404 Latency(us) 00:10:25.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.405 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:25.405 Verification LBA range: start 0x0 length 0xbd0b 00:10:25.405 Nvme0n1 : 5.35 292.75 18.30 0.00 0.00 429669.77 40694.74 523289.36 00:10:25.405 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:25.405 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:25.405 Nvme0n1 : 5.38 274.94 17.18 0.00 0.00 460054.72 10236.10 555245.96 00:10:25.405 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:25.405 Verification LBA range: start 0x0 length 0xa000 00:10:25.405 Nvme1n1 : 5.35 292.65 18.29 0.00 0.00 424974.80 39696.09 499321.90 00:10:25.405 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:25.405 Verification LBA range: start 0xa000 length 0xa000 00:10:25.405 Nvme1n1 : 5.38 274.82 17.18 0.00 0.00 454825.44 9799.19 519294.78 00:10:25.405 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:25.405 Verification LBA range: start 0x0 length 0x8000 00:10:25.405 Nvme2n1 : 5.36 292.54 18.28 0.00 0.00 420190.62 39696.09 453384.29 00:10:25.405 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:25.405 Verification LBA range: start 0x8000 length 0x8000 00:10:25.405 Nvme2n1 : 5.38 274.72 17.17 0.00 0.00 449685.13 9674.36 507311.06 00:10:25.405 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:25.405 Verification LBA range: start 0x0 length 0x8000 00:10:25.405 Nvme2n2 : 5.36 300.24 18.77 0.00 0.00 407635.65 3417.23 423424.98 00:10:25.405 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:25.405 Verification LBA range: start 0x8000 length 0x8000 00:10:25.405 Nvme2n2 : 5.38 274.62 17.16 0.00 0.00 444515.48 9736.78 499321.90 00:10:25.405 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:25.405 Verification LBA range: start 0x0 length 0x8000 00:10:25.405 Nvme2n3 : 5.36 300.13 18.76 0.00 0.00 402958.03 3900.95 427419.55 00:10:25.405 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:25.405 Verification LBA range: start 0x8000 length 0x8000 00:10:25.405 Nvme2n3 : 5.39 282.90 17.68 0.00 0.00 427900.55 2231.34 519294.78 00:10:25.405 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:25.405 Verification LBA range: start 0x0 length 0x2000 00:10:25.405 Nvme3n1 : 5.37 308.87 19.30 0.00 0.00 387910.15 2917.91 401454.81 00:10:25.405 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:25.405 Verification LBA range: start 0x2000 length 0x2000 00:10:25.405 Nvme3n1 : 5.39 282.79 17.67 0.00 0.00 422817.23 2746.27 535273.08 00:10:25.405 =================================================================================================================== 00:10:25.405 Total : 3451.97 215.75 0.00 0.00 427000.04 2231.34 555245.96 00:10:26.341 00:10:26.341 real 0m8.693s 00:10:26.341 user 0m15.881s 00:10:26.341 sys 0m0.331s 00:10:26.341 05:07:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.341 05:07:45 -- common/autotest_common.sh@10 -- # set +x 00:10:26.341 ************************************ 00:10:26.341 END TEST bdev_verify_big_io 00:10:26.341 ************************************ 00:10:26.341 05:07:45 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:26.341 05:07:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:26.341 05:07:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:26.341 05:07:45 -- common/autotest_common.sh@10 -- # set +x 00:10:26.341 ************************************ 00:10:26.341 START TEST bdev_write_zeroes 00:10:26.341 ************************************ 00:10:26.341 05:07:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:26.341 [2024-07-26 05:07:45.289089] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:26.341 [2024-07-26 05:07:45.289258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62141 ] 00:10:26.600 [2024-07-26 05:07:45.473273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.600 [2024-07-26 05:07:45.701819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.535 Running I/O for 1 seconds... 00:10:28.469 00:10:28.469 Latency(us) 00:10:28.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.469 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:28.469 Nvme0n1 : 1.01 9659.21 37.73 0.00 0.00 13215.86 8363.64 22594.32 00:10:28.469 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:28.469 Nvme1n1 : 1.01 9647.72 37.69 0.00 0.00 13213.84 8800.55 23343.30 00:10:28.469 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:28.469 Nvme2n1 : 1.02 9637.32 37.65 0.00 0.00 13164.64 8987.79 19473.55 00:10:28.469 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:28.469 Nvme2n2 : 1.02 9680.26 37.81 0.00 0.00 13079.04 6491.18 16477.62 00:10:28.469 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:28.469 Nvme2n3 : 1.02 9671.32 37.78 0.00 0.00 13068.98 6553.60 16352.79 00:10:28.469 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:28.469 Nvme3n1 : 1.02 9662.44 37.74 0.00 0.00 13057.42 6584.81 15915.89 00:10:28.469 =================================================================================================================== 00:10:28.469 Total : 57958.27 226.40 0.00 0.00 13133.09 6491.18 23343.30 00:10:29.847 00:10:29.847 real 0m3.598s 00:10:29.847 user 0m3.209s 00:10:29.847 sys 0m0.275s 00:10:29.847 05:07:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.847 05:07:48 -- common/autotest_common.sh@10 -- # set +x 00:10:29.847 ************************************ 00:10:29.847 END TEST bdev_write_zeroes 00:10:29.847 ************************************ 00:10:29.847 05:07:48 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:29.847 05:07:48 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:29.847 05:07:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:29.847 05:07:48 -- common/autotest_common.sh@10 -- # 
set +x 00:10:29.847 ************************************ 00:10:29.847 START TEST bdev_json_nonenclosed 00:10:29.847 ************************************ 00:10:29.847 05:07:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:29.847 [2024-07-26 05:07:48.952386] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:29.848 [2024-07-26 05:07:48.952551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62200 ] 00:10:30.108 [2024-07-26 05:07:49.135678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.367 [2024-07-26 05:07:49.374093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.367 [2024-07-26 05:07:49.374259] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:30.367 [2024-07-26 05:07:49.374290] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:30.936 00:10:30.936 real 0m0.984s 00:10:30.936 user 0m0.712s 00:10:30.936 sys 0m0.165s 00:10:30.936 05:07:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.936 05:07:49 -- common/autotest_common.sh@10 -- # set +x 00:10:30.936 ************************************ 00:10:30.936 END TEST bdev_json_nonenclosed 00:10:30.936 ************************************ 00:10:30.936 05:07:49 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:30.936 05:07:49 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:30.936 05:07:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:30.936 05:07:49 -- common/autotest_common.sh@10 -- # set +x 00:10:30.936 ************************************ 00:10:30.936 START TEST bdev_json_nonarray 00:10:30.936 ************************************ 00:10:30.936 05:07:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:30.936 [2024-07-26 05:07:50.000792] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:30.936 [2024-07-26 05:07:50.000959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62231 ] 00:10:31.194 [2024-07-26 05:07:50.185121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.453 [2024-07-26 05:07:50.424445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.453 [2024-07-26 05:07:50.424632] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
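This "should be an array" error, like the "not enclosed in {}" one above it, is the expected outcome: bdev_json_nonenclosed and bdev_json_nonarray are negative tests that feed bdevperf deliberately malformed --json configs and pass only when spdk_app_start rejects them. The shapes implied by the two messages (our inference; the real files live under test/bdev/):

  # nonenclosed.json - valid members, but no enclosing top-level {} object:
  #   "subsystems": []
  # nonarray.json - enclosed, but "subsystems" is not an array:
  #   { "subsystems": { } }
  # both runs are expected to die in spdk_app_start, so failure is the pass:
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json \
      -q 128 -o 4096 -w write_zeroes -t 1 '' \
      && echo 'unexpected success' || echo 'failed as expected'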
00:10:31.453 [2024-07-26 05:07:50.424657] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:32.022 00:10:32.022 real 0m0.982s 00:10:32.022 user 0m0.708s 00:10:32.022 sys 0m0.168s 00:10:32.022 05:07:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.022 ************************************ 00:10:32.022 05:07:50 -- common/autotest_common.sh@10 -- # set +x 00:10:32.022 END TEST bdev_json_nonarray 00:10:32.022 ************************************ 00:10:32.022 05:07:50 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:10:32.022 05:07:50 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:10:32.022 05:07:50 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:10:32.022 05:07:50 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:10:32.022 05:07:50 -- bdev/blockdev.sh@809 -- # cleanup 00:10:32.022 05:07:50 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:32.022 05:07:50 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:32.022 05:07:50 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:10:32.022 05:07:50 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:10:32.022 05:07:50 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:10:32.022 05:07:50 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:10:32.022 00:10:32.022 real 0m54.392s 00:10:32.022 user 1m24.392s 00:10:32.022 sys 0m7.466s 00:10:32.022 05:07:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.022 ************************************ 00:10:32.022 END TEST blockdev_nvme 00:10:32.022 05:07:50 -- common/autotest_common.sh@10 -- # set +x 00:10:32.022 ************************************ 00:10:32.022 05:07:50 -- spdk/autotest.sh@219 -- # uname -s 00:10:32.022 05:07:50 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:10:32.022 05:07:50 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:32.022 05:07:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:32.022 05:07:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:32.022 05:07:50 -- common/autotest_common.sh@10 -- # set +x 00:10:32.022 ************************************ 00:10:32.022 START TEST blockdev_nvme_gpt 00:10:32.022 ************************************ 00:10:32.022 05:07:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:32.022 * Looking for test storage... 
00:10:32.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:32.022 05:07:51 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:32.022 05:07:51 -- bdev/nbd_common.sh@6 -- # set -e 00:10:32.022 05:07:51 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:32.022 05:07:51 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:32.022 05:07:51 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:32.022 05:07:51 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:32.022 05:07:51 -- bdev/blockdev.sh@18 -- # : 00:10:32.022 05:07:51 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:10:32.022 05:07:51 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:10:32.022 05:07:51 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:10:32.022 05:07:51 -- bdev/blockdev.sh@672 -- # uname -s 00:10:32.022 05:07:51 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:10:32.022 05:07:51 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:10:32.022 05:07:51 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:10:32.022 05:07:51 -- bdev/blockdev.sh@681 -- # crypto_device= 00:10:32.022 05:07:51 -- bdev/blockdev.sh@682 -- # dek= 00:10:32.022 05:07:51 -- bdev/blockdev.sh@683 -- # env_ctx= 00:10:32.022 05:07:51 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:10:32.022 05:07:51 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:10:32.022 05:07:51 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:10:32.022 05:07:51 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:10:32.022 05:07:51 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:10:32.022 05:07:51 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62306 00:10:32.022 05:07:51 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:32.022 05:07:51 -- bdev/blockdev.sh@47 -- # waitforlisten 62306 00:10:32.022 05:07:51 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:32.022 05:07:51 -- common/autotest_common.sh@819 -- # '[' -z 62306 ']' 00:10:32.022 05:07:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.022 05:07:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:32.022 05:07:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.022 05:07:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:32.022 05:07:51 -- common/autotest_common.sh@10 -- # set +x 00:10:32.282 [2024-07-26 05:07:51.240717] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
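start_spdk_tgt, traced just above as pid 62306, backgrounds the target, arms a cleanup trap, and blocks until the RPC socket answers; the EAL parameter dump for that process continues below. In minimal form, using the helper names the harness itself uses:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$spdk_tgt" '' '' &
  spdk_tgt_pid=$!
  trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
  # waitforlisten polls until the pid is alive and /var/tmp/spdk.sock
  # accepts RPC connections
  waitforlisten "$spdk_tgt_pid"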
00:10:32.282 [2024-07-26 05:07:51.240880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62306 ] 00:10:32.541 [2024-07-26 05:07:51.425478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.800 [2024-07-26 05:07:51.660963] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:32.800 [2024-07-26 05:07:51.661147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.737 05:07:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:33.737 05:07:52 -- common/autotest_common.sh@852 -- # return 0 00:10:33.737 05:07:52 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:10:33.737 05:07:52 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:10:33.737 05:07:52 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:34.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:34.563 Waiting for block devices as requested 00:10:34.563 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.563 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.822 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.822 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:10:40.094 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:10:40.094 05:07:58 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:10:40.094 05:07:58 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:10:40.094 05:07:58 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:10:40.094 05:07:58 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:10:40.094 05:07:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.094 05:07:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:10:40.094 05:07:58 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:10:40.094 05:07:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:10:40.094 05:07:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.094 05:07:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.094 05:07:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:10:40.094 05:07:58 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:10:40.094 05:07:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:40.094 05:07:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.094 05:07:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.094 05:07:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:10:40.094 05:07:58 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:10:40.094 05:07:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:40.094 05:07:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.094 05:07:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.095 05:07:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:10:40.095 05:07:58 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:10:40.095 05:07:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:40.095 05:07:58 -- 
common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.095 05:07:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.095 05:07:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:10:40.095 05:07:58 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:10:40.095 05:07:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:40.095 05:07:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.095 05:07:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.095 05:07:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:10:40.095 05:07:58 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:10:40.095 05:07:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:40.095 05:07:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.095 05:07:58 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:10:40.095 05:07:58 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:10:40.095 05:07:58 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:10:40.095 05:07:58 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:40.095 05:07:58 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:10:40.095 05:07:58 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:10:40.095 05:07:58 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:10:40.095 05:07:58 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:10:40.095 05:07:58 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:40.095 05:07:58 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:10:40.095 05:07:58 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:10:40.095 05:07:58 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:10:40.095 05:07:58 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:10:40.095 BYT; 00:10:40.095 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:40.095 05:07:58 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:10:40.095 BYT; 00:10:40.095 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:40.095 05:07:58 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:10:40.095 05:07:58 -- bdev/blockdev.sh@114 -- # break 00:10:40.095 05:07:58 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:10:40.095 05:07:58 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:40.095 05:07:58 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:40.095 05:07:58 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:40.095 05:07:58 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:10:40.095 05:07:58 -- scripts/common.sh@410 -- # local spdk_guid 00:10:40.095 05:07:58 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:40.095 05:07:58 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:40.095 05:07:58 -- scripts/common.sh@415 -- # IFS='()' 00:10:40.095 05:07:58 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:10:40.095 05:07:58 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:40.095 05:07:58 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:40.095 05:07:58 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:40.095 05:07:58 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:40.095 05:07:58 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:40.095 05:07:58 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:10:40.095 05:07:58 -- scripts/common.sh@422 -- # local spdk_guid 00:10:40.095 05:07:58 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:40.095 05:07:58 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:40.095 05:07:58 -- scripts/common.sh@427 -- # IFS='()' 00:10:40.095 05:07:58 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:10:40.095 05:07:58 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:40.095 05:07:59 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:40.095 05:07:59 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:40.095 05:07:59 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:40.095 05:07:59 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:40.095 05:07:59 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:10:41.030 The operation has completed successfully. 00:10:41.030 05:08:00 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:10:42.405 The operation has completed successfully. 
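At this point setup_gpt_conf has finished: it found /dev/nvme2n1 without a disk label, partitioned it, and retagged both partitions with the SPDK type GUIDs scraped from module/bdev/gpt/gpt.h above. The whole preparation, condensed, with commands and GUIDs exactly as traced:

  dev=/dev/nvme2n1
  parted -s "$dev" mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% \
      mkpart SPDK_TEST_second 50% 100%
  # partition 1: the current SPDK GPT type GUID plus a fixed unique GUID
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
         -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
  # partition 2: the old SPDK type GUID, kept to exercise compatibility
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
         -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"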
00:10:42.405 05:08:01 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:43.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:43.340 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.340 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.340 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.340 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.598 05:08:02 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:10:43.598 05:08:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:43.598 05:08:02 -- common/autotest_common.sh@10 -- # set +x 00:10:43.598 [] 00:10:43.598 05:08:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:43.598 05:08:02 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:10:43.598 05:08:02 -- bdev/blockdev.sh@79 -- # local json 00:10:43.598 05:08:02 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:10:43.598 05:08:02 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:43.598 05:08:02 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:10:43.598 05:08:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:43.598 05:08:02 -- common/autotest_common.sh@10 -- # set +x 00:10:43.855 05:08:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:43.855 05:08:02 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:10:43.855 05:08:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:43.855 05:08:02 -- common/autotest_common.sh@10 -- # set +x 00:10:43.855 05:08:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:43.855 05:08:02 -- bdev/blockdev.sh@738 -- # cat 00:10:43.855 05:08:02 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:10:43.855 05:08:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:43.855 05:08:02 -- common/autotest_common.sh@10 -- # set +x 00:10:43.855 05:08:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:43.855 05:08:02 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:10:43.855 05:08:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:43.855 05:08:02 -- common/autotest_common.sh@10 -- # set +x 00:10:44.113 05:08:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:44.113 05:08:02 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:44.113 05:08:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:44.113 05:08:02 -- common/autotest_common.sh@10 -- # set +x 00:10:44.113 05:08:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:44.113 05:08:02 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:10:44.113 05:08:03 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:10:44.113 05:08:03 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:10:44.113 05:08:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:44.113 05:08:03 -- common/autotest_common.sh@10 -- # set +x 00:10:44.113 05:08:03 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:44.113 05:08:03 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:10:44.113 05:08:03 -- bdev/blockdev.sh@747 -- # jq -r .name 00:10:44.113 05:08:03 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "1b22eb67-af3a-4e75-95f8-e7b973d911cf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1b22eb67-af3a-4e75-95f8-e7b973d911cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "76932ced-4fb1-424d-93c9-56d2b356477d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "76932ced-4fb1-424d-93c9-56d2b356477d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7e34345d-ecc4-425b-b184-42702cc844dc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7e34345d-ecc4-425b-b184-42702cc844dc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "399d91d8-2419-48e5-ad2c-344f466ce735"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "399d91d8-2419-48e5-ad2c-344f466ce735",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7b032a85-e733-4706-acd2-fb6a94700f63"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7b032a85-e733-4706-acd2-fb6a94700f63",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:44.113 05:08:03 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:10:44.113 05:08:03 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:10:44.113 05:08:03 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:10:44.113 05:08:03 -- bdev/blockdev.sh@752 -- # killprocess 62306 00:10:44.113 05:08:03 -- common/autotest_common.sh@926 -- # '[' -z 62306 ']' 00:10:44.113 05:08:03 -- common/autotest_common.sh@930 -- # kill -0 62306 00:10:44.113 05:08:03 -- common/autotest_common.sh@931 -- # uname 00:10:44.113 05:08:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:44.113 05:08:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62306 00:10:44.113 killing process with pid 62306 00:10:44.113 05:08:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:44.113 05:08:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:44.113 05:08:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62306' 00:10:44.113 05:08:03 -- common/autotest_common.sh@945 -- # kill 62306 00:10:44.113 05:08:03 -- common/autotest_common.sh@950 -- # wait 62306 00:10:47.427 05:08:05 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:47.427 05:08:05 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:47.427 05:08:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:47.427 05:08:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:47.427 05:08:05 -- common/autotest_common.sh@10 -- # set +x 00:10:47.427 ************************************ 00:10:47.427 START TEST bdev_hello_world 00:10:47.427 ************************************ 00:10:47.427 05:08:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:47.427 [2024-07-26 05:08:06.018428] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:47.428 [2024-07-26 05:08:06.018574] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62998 ] 00:10:47.428 [2024-07-26 05:08:06.188124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.428 [2024-07-26 05:08:06.476135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.362 [2024-07-26 05:08:07.245343] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:48.362 [2024-07-26 05:08:07.245410] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:10:48.362 [2024-07-26 05:08:07.245435] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:48.362 [2024-07-26 05:08:07.248809] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:48.362 [2024-07-26 05:08:07.249548] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:48.362 [2024-07-26 05:08:07.249592] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:48.362 [2024-07-26 05:08:07.249811] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:10:48.362 00:10:48.362 [2024-07-26 05:08:07.249837] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:49.741 00:10:49.741 real 0m2.755s 00:10:49.741 user 0m2.299s 00:10:49.741 sys 0m0.345s 00:10:49.741 05:08:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.741 05:08:08 -- common/autotest_common.sh@10 -- # set +x 00:10:49.741 ************************************ 00:10:49.741 END TEST bdev_hello_world 00:10:49.741 ************************************ 00:10:49.741 05:08:08 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:10:49.741 05:08:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:49.741 05:08:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:49.741 05:08:08 -- common/autotest_common.sh@10 -- # set +x 00:10:49.741 ************************************ 00:10:49.741 START TEST bdev_bounds 00:10:49.741 ************************************ 00:10:49.741 05:08:08 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:10:49.741 Process bdevio pid: 63040 00:10:49.741 05:08:08 -- bdev/blockdev.sh@288 -- # bdevio_pid=63040 00:10:49.741 05:08:08 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:49.741 05:08:08 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 63040' 00:10:49.741 05:08:08 -- bdev/blockdev.sh@291 -- # waitforlisten 63040 00:10:49.741 05:08:08 -- common/autotest_common.sh@819 -- # '[' -z 63040 ']' 00:10:49.741 05:08:08 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:49.741 05:08:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.741 05:08:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:49.741 05:08:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
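Before the bdevio tests can run, waitforlisten has to block until the freshly launched app answers on /var/tmp/spdk.sock. A condensed sketch of that polling pattern, using the rpc_addr and max_retries=100 locals visible in the trace and assuming SPDK's generic rpc_get_methods call as the liveness probe:

rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 1; i <= max_retries; i++)); do
    # The socket answering any RPC at all is proof the app is up and listening.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5   # assumption: the trace does not show the retry delay
done
(( i <= max_retries ))   # non-zero status here fails the test: the app never came up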
00:10:49.741 05:08:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:49.741 05:08:08 -- common/autotest_common.sh@10 -- # set +x 00:10:50.001 [2024-07-26 05:08:08.857799] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:50.001 [2024-07-26 05:08:08.857958] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63040 ] 00:10:50.001 [2024-07-26 05:08:09.038101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:50.259 [2024-07-26 05:08:09.322135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.260 [2024-07-26 05:08:09.322309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.260 [2024-07-26 05:08:09.322340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.638 05:08:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:51.638 05:08:10 -- common/autotest_common.sh@852 -- # return 0 00:10:51.638 05:08:10 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:51.638 I/O targets: 00:10:51.638 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:10:51.638 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:10:51.638 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:51.638 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:51.638 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:51.638 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:51.638 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:51.638 00:10:51.638 00:10:51.638 CUnit - A unit testing framework for C - Version 2.1-3 00:10:51.638 http://cunit.sourceforge.net/ 00:10:51.638 00:10:51.638 00:10:51.638 Suite: bdevio tests on: Nvme3n1 00:10:51.638 Test: blockdev write read block ...passed 00:10:51.638 Test: blockdev write zeroes read block ...passed 00:10:51.638 Test: blockdev write zeroes read no split ...passed 00:10:51.638 Test: blockdev write zeroes read split ...passed 00:10:51.638 Test: blockdev write zeroes read split partial ...passed 00:10:51.638 Test: blockdev reset ...[2024-07-26 05:08:10.608328] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:51.638 [2024-07-26 05:08:10.612828] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:51.638 passed 00:10:51.638 Test: blockdev write read 8 blocks ...passed 00:10:51.638 Test: blockdev write read size > 128k ...passed 00:10:51.638 Test: blockdev write read invalid size ...passed 00:10:51.638 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.638 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.638 Test: blockdev write read max offset ...passed 00:10:51.638 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.638 Test: blockdev writev readv 8 blocks ...passed 00:10:51.638 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.638 Test: blockdev writev readv block ...passed 00:10:51.638 Test: blockdev writev readv size > 128k ...passed 00:10:51.638 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.638 Test: blockdev comparev and writev ...[2024-07-26 05:08:10.622798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27000a000 len:0x1000 00:10:51.638 [2024-07-26 05:08:10.622857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:51.638 passed 00:10:51.638 Test: blockdev nvme passthru rw ...passed 00:10:51.638 Test: blockdev nvme passthru vendor specific ...passed 00:10:51.638 Test: blockdev nvme admin passthru ...[2024-07-26 05:08:10.623856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:51.638 [2024-07-26 05:08:10.623893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:51.638 passed 00:10:51.638 Test: blockdev copy ...passed 00:10:51.638 Suite: bdevio tests on: Nvme2n3 00:10:51.638 Test: blockdev write read block ...passed 00:10:51.638 Test: blockdev write zeroes read block ...passed 00:10:51.638 Test: blockdev write zeroes read no split ...passed 00:10:51.638 Test: blockdev write zeroes read split ...passed 00:10:51.638 Test: blockdev write zeroes read split partial ...passed 00:10:51.638 Test: blockdev reset ...[2024-07-26 05:08:10.700309] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:51.638 [2024-07-26 05:08:10.705101] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:51.638 passed 00:10:51.638 Test: blockdev write read 8 blocks ...passed 00:10:51.638 Test: blockdev write read size > 128k ...passed 00:10:51.638 Test: blockdev write read invalid size ...passed 00:10:51.638 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.638 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.638 Test: blockdev write read max offset ...passed 00:10:51.638 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.638 Test: blockdev writev readv 8 blocks ...passed 00:10:51.638 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.638 Test: blockdev writev readv block ...passed 00:10:51.638 Test: blockdev writev readv size > 128k ...passed 00:10:51.638 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.638 Test: blockdev comparev and writev ...[2024-07-26 05:08:10.715005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x24f704000 len:0x1000 00:10:51.639 [2024-07-26 05:08:10.715185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:51.639 passed 00:10:51.639 Test: blockdev nvme passthru rw ...passed 00:10:51.639 Test: blockdev nvme passthru vendor specific ...[2024-07-26 05:08:10.716355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:51.639 [2024-07-26 05:08:10.716507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:10:51.639 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:10:51.639 passed 00:10:51.639 Test: blockdev copy ...passed 00:10:51.639 Suite: bdevio tests on: Nvme2n2 00:10:51.639 Test: blockdev write read block ...passed 00:10:51.639 Test: blockdev write zeroes read block ...passed 00:10:51.639 Test: blockdev write zeroes read no split ...passed 00:10:51.898 Test: blockdev write zeroes read split ...passed 00:10:51.898 Test: blockdev write zeroes read split partial ...passed 00:10:51.898 Test: blockdev reset ...[2024-07-26 05:08:10.791978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:51.898 [2024-07-26 05:08:10.796476] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:51.898 passed 00:10:51.898 Test: blockdev write read 8 blocks ...passed 00:10:51.898 Test: blockdev write read size > 128k ...passed 00:10:51.898 Test: blockdev write read invalid size ...passed 00:10:51.898 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.898 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.898 Test: blockdev write read max offset ...passed 00:10:51.898 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.898 Test: blockdev writev readv 8 blocks ...passed 00:10:51.898 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.898 Test: blockdev writev readv block ...passed 00:10:51.898 Test: blockdev writev readv size > 128k ...passed 00:10:51.898 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.898 Test: blockdev comparev and writev ...[2024-07-26 05:08:10.806460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x24f704000 len:0x1000 00:10:51.898 [2024-07-26 05:08:10.806644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:51.898 passed 00:10:51.898 Test: blockdev nvme passthru rw ...passed 00:10:51.898 Test: blockdev nvme passthru vendor specific ...[2024-07-26 05:08:10.807704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:51.898 [2024-07-26 05:08:10.807844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:51.898 passed 00:10:51.898 Test: blockdev nvme admin passthru ...passed 00:10:51.898 Test: blockdev copy ...passed 00:10:51.898 Suite: bdevio tests on: Nvme2n1 00:10:51.898 Test: blockdev write read block ...passed 00:10:51.898 Test: blockdev write zeroes read block ...passed 00:10:51.898 Test: blockdev write zeroes read no split ...passed 00:10:51.898 Test: blockdev write zeroes read split ...passed 00:10:51.898 Test: blockdev write zeroes read split partial ...passed 00:10:51.898 Test: blockdev reset ...[2024-07-26 05:08:10.883605] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:51.898 [2024-07-26 05:08:10.888064] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:51.898 passed 00:10:51.898 Test: blockdev write read 8 blocks ...passed 00:10:51.898 Test: blockdev write read size > 128k ...passed 00:10:51.898 Test: blockdev write read invalid size ...passed 00:10:51.898 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.898 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.898 Test: blockdev write read max offset ...passed 00:10:51.898 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.898 Test: blockdev writev readv 8 blocks ...passed 00:10:51.898 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.898 Test: blockdev writev readv block ...passed 00:10:51.898 Test: blockdev writev readv size > 128k ...passed 00:10:51.898 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.898 Test: blockdev comparev and writev ...[2024-07-26 05:08:10.897687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27f23c000 len:0x1000 00:10:51.898 [2024-07-26 05:08:10.897739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:51.898 passed 00:10:51.898 Test: blockdev nvme passthru rw ...passed 00:10:51.898 Test: blockdev nvme passthru vendor specific ...passed 00:10:51.898 Test: blockdev nvme admin passthru ...[2024-07-26 05:08:10.898546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:51.898 [2024-07-26 05:08:10.898582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:51.898 passed 00:10:51.898 Test: blockdev copy ...passed 00:10:51.898 Suite: bdevio tests on: Nvme1n1 00:10:51.898 Test: blockdev write read block ...passed 00:10:51.898 Test: blockdev write zeroes read block ...passed 00:10:51.898 Test: blockdev write zeroes read no split ...passed 00:10:51.898 Test: blockdev write zeroes read split ...passed 00:10:51.898 Test: blockdev write zeroes read split partial ...passed 00:10:51.898 Test: blockdev reset ...[2024-07-26 05:08:10.974187] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:51.898 [2024-07-26 05:08:10.978582] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:51.898 passed 00:10:51.899 Test: blockdev write read 8 blocks ...passed 00:10:51.899 Test: blockdev write read size > 128k ...passed 00:10:51.899 Test: blockdev write read invalid size ...passed 00:10:51.899 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.899 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.899 Test: blockdev write read max offset ...passed 00:10:51.899 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.899 Test: blockdev writev readv 8 blocks ...passed 00:10:51.899 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.899 Test: blockdev writev readv block ...passed 00:10:51.899 Test: blockdev writev readv size > 128k ...passed 00:10:51.899 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.899 Test: blockdev comparev and writev ...[2024-07-26 05:08:10.988537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27f238000 len:0x1000 00:10:51.899 [2024-07-26 05:08:10.988713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:51.899 passed 00:10:51.899 Test: blockdev nvme passthru rw ...passed 00:10:51.899 Test: blockdev nvme passthru vendor specific ...[2024-07-26 05:08:10.989968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:51.899 [2024-07-26 05:08:10.990127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:51.899 passed 00:10:51.899 Test: blockdev nvme admin passthru ...passed 00:10:51.899 Test: blockdev copy ...passed 00:10:51.899 Suite: bdevio tests on: Nvme0n1p2 00:10:51.899 Test: blockdev write read block ...passed 00:10:51.899 Test: blockdev write zeroes read block ...passed 00:10:51.899 Test: blockdev write zeroes read no split ...passed 00:10:52.158 Test: blockdev write zeroes read split ...passed 00:10:52.158 Test: blockdev write zeroes read split partial ...passed 00:10:52.158 Test: blockdev reset ...[2024-07-26 05:08:11.067987] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:52.158 [2024-07-26 05:08:11.072299] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:52.158 passed 00:10:52.158 Test: blockdev write read 8 blocks ...passed 00:10:52.158 Test: blockdev write read size > 128k ...passed 00:10:52.158 Test: blockdev write read invalid size ...passed 00:10:52.158 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:52.158 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:52.158 Test: blockdev write read max offset ...passed 00:10:52.158 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:52.158 Test: blockdev writev readv 8 blocks ...passed 00:10:52.158 Test: blockdev writev readv 30 x 1block ...passed 00:10:52.158 Test: blockdev writev readv block ...passed 00:10:52.158 Test: blockdev writev readv size > 128k ...passed 00:10:52.158 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:52.158 Test: blockdev comparev and writev ...passed 00:10:52.158 Test: blockdev nvme passthru rw ...passed 00:10:52.158 Test: blockdev nvme passthru vendor specific ...passed 00:10:52.158 Test: blockdev nvme admin passthru ...passed 00:10:52.158 Test: blockdev copy ...[2024-07-26 05:08:11.080752] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:10:52.158 separate metadata which is not supported yet. 00:10:52.158 passed 00:10:52.158 Suite: bdevio tests on: Nvme0n1p1 00:10:52.158 Test: blockdev write read block ...passed 00:10:52.158 Test: blockdev write zeroes read block ...passed 00:10:52.158 Test: blockdev write zeroes read no split ...passed 00:10:52.158 Test: blockdev write zeroes read split ...passed 00:10:52.158 Test: blockdev write zeroes read split partial ...passed 00:10:52.158 Test: blockdev reset ...[2024-07-26 05:08:11.149352] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:52.158 [2024-07-26 05:08:11.153615] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:52.158 passed 00:10:52.158 Test: blockdev write read 8 blocks ...passed 00:10:52.158 Test: blockdev write read size > 128k ...passed 00:10:52.158 Test: blockdev write read invalid size ...passed 00:10:52.158 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:52.158 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:52.158 Test: blockdev write read max offset ...passed 00:10:52.158 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:52.158 Test: blockdev writev readv 8 blocks ...passed 00:10:52.158 Test: blockdev writev readv 30 x 1block ...passed 00:10:52.158 Test: blockdev writev readv block ...passed 00:10:52.158 Test: blockdev writev readv size > 128k ...passed 00:10:52.158 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:52.158 Test: blockdev comparev and writev ...passed 00:10:52.158 Test: blockdev nvme passthru rw ...passed 00:10:52.158 Test: blockdev nvme passthru vendor specific ...passed 00:10:52.158 Test: blockdev nvme admin passthru ...passed 00:10:52.158 Test: blockdev copy ...[2024-07-26 05:08:11.162150] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:10:52.158 separate metadata which is not supported yet. 
00:10:52.158 passed 00:10:52.158 00:10:52.158 Run Summary: Type Total Ran Passed Failed Inactive 00:10:52.158 suites 7 7 n/a 0 0 00:10:52.158 tests 161 161 161 0 0 00:10:52.158 asserts 1006 1006 1006 0 n/a 00:10:52.158 00:10:52.158 Elapsed time = 1.722 seconds 00:10:52.158 0 00:10:52.158 05:08:11 -- bdev/blockdev.sh@293 -- # killprocess 63040 00:10:52.158 05:08:11 -- common/autotest_common.sh@926 -- # '[' -z 63040 ']' 00:10:52.158 05:08:11 -- common/autotest_common.sh@930 -- # kill -0 63040 00:10:52.158 05:08:11 -- common/autotest_common.sh@931 -- # uname 00:10:52.158 05:08:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:52.158 05:08:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63040 00:10:52.158 05:08:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:52.158 05:08:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:52.158 05:08:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63040' 00:10:52.158 killing process with pid 63040 00:10:52.158 05:08:11 -- common/autotest_common.sh@945 -- # kill 63040 00:10:52.158 05:08:11 -- common/autotest_common.sh@950 -- # wait 63040 00:10:53.537 05:08:12 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:10:53.537 00:10:53.537 real 0m3.685s 00:10:53.537 user 0m9.161s 00:10:53.537 sys 0m0.566s 00:10:53.537 05:08:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.537 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:10:53.537 ************************************ 00:10:53.537 END TEST bdev_bounds 00:10:53.537 ************************************ 00:10:53.537 05:08:12 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:53.537 05:08:12 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:53.537 05:08:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:53.537 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:10:53.537 ************************************ 00:10:53.537 START TEST bdev_nbd 00:10:53.537 ************************************ 00:10:53.537 05:08:12 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:53.537 05:08:12 -- bdev/blockdev.sh@298 -- # uname -s 00:10:53.537 05:08:12 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:10:53.537 05:08:12 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:53.537 05:08:12 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:53.537 05:08:12 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:53.537 05:08:12 -- bdev/blockdev.sh@302 -- # local bdev_all 00:10:53.537 05:08:12 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:10:53.537 05:08:12 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:10:53.537 05:08:12 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:53.537 05:08:12 -- bdev/blockdev.sh@309 -- # local nbd_all 00:10:53.537 05:08:12 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:10:53.537 05:08:12 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:53.537 05:08:12 -- bdev/blockdev.sh@312 -- # local nbd_list 00:10:53.537 05:08:12 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:53.537 05:08:12 -- bdev/blockdev.sh@313 -- # local bdev_list 00:10:53.537 05:08:12 -- bdev/blockdev.sh@316 -- # nbd_pid=63118 00:10:53.537 05:08:12 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:53.537 05:08:12 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:53.537 05:08:12 -- bdev/blockdev.sh@318 -- # waitforlisten 63118 /var/tmp/spdk-nbd.sock 00:10:53.537 05:08:12 -- common/autotest_common.sh@819 -- # '[' -z 63118 ']' 00:10:53.537 05:08:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:53.537 05:08:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:53.537 05:08:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:53.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:53.537 05:08:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:53.537 05:08:12 -- common/autotest_common.sh@10 -- # set +x 00:10:53.537 [2024-07-26 05:08:12.619765] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:53.537 [2024-07-26 05:08:12.619916] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.796 [2024-07-26 05:08:12.802023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.055 [2024-07-26 05:08:13.090440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.992 05:08:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:54.992 05:08:14 -- common/autotest_common.sh@852 -- # return 0 00:10:54.992 05:08:14 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:54.992 05:08:14 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.992 05:08:14 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:54.992 05:08:14 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:54.992 05:08:14 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:54.992 05:08:14 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.992 05:08:14 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:54.992 05:08:14 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:54.992 05:08:14 -- bdev/nbd_common.sh@24 -- # local i 00:10:54.992 05:08:14 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:54.992 05:08:14 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:54.992 05:08:14 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:54.992 05:08:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:10:55.252 05:08:14 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:55.252 05:08:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:55.252 05:08:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:55.252 05:08:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:55.252 05:08:14 -- common/autotest_common.sh@857 -- # local i 00:10:55.252 05:08:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:55.252 05:08:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:55.252 05:08:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:55.252 05:08:14 -- common/autotest_common.sh@861 -- # break 00:10:55.252 05:08:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:55.252 05:08:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:55.252 05:08:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.252 1+0 records in 00:10:55.252 1+0 records out 00:10:55.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000908067 s, 4.5 MB/s 00:10:55.252 05:08:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.252 05:08:14 -- common/autotest_common.sh@874 -- # size=4096 00:10:55.252 05:08:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.252 05:08:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:55.252 05:08:14 -- common/autotest_common.sh@877 -- # return 0 00:10:55.252 05:08:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:55.252 05:08:14 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:55.252 05:08:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:10:55.511 05:08:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:55.512 05:08:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:55.512 05:08:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:55.512 05:08:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:55.512 05:08:14 -- common/autotest_common.sh@857 -- # local i 00:10:55.512 05:08:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:55.512 05:08:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:55.512 05:08:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:55.512 05:08:14 -- common/autotest_common.sh@861 -- # break 00:10:55.512 05:08:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:55.512 05:08:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:55.512 05:08:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.512 1+0 records in 00:10:55.512 1+0 records out 00:10:55.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565069 s, 7.2 MB/s 00:10:55.512 05:08:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.512 05:08:14 -- common/autotest_common.sh@874 -- # size=4096 00:10:55.512 05:08:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.512 05:08:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:55.512 05:08:14 -- common/autotest_common.sh@877 -- # return 0 00:10:55.512 05:08:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:55.512 05:08:14 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:55.512 05:08:14 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:55.771 05:08:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:55.771 05:08:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:55.771 05:08:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:55.771 05:08:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:10:55.771 05:08:14 -- common/autotest_common.sh@857 -- # local i 00:10:55.771 05:08:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:55.771 05:08:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:55.771 05:08:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:10:55.771 05:08:14 -- common/autotest_common.sh@861 -- # break 00:10:55.771 05:08:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:55.771 05:08:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:55.771 05:08:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.771 1+0 records in 00:10:55.771 1+0 records out 00:10:55.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526741 s, 7.8 MB/s 00:10:55.771 05:08:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.771 05:08:14 -- common/autotest_common.sh@874 -- # size=4096 00:10:55.771 05:08:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.771 05:08:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:55.771 05:08:14 -- common/autotest_common.sh@877 -- # return 0 00:10:55.771 05:08:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:55.771 05:08:14 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:55.771 05:08:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:56.030 05:08:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:56.030 05:08:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:56.030 05:08:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:56.030 05:08:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:10:56.030 05:08:15 -- common/autotest_common.sh@857 -- # local i 00:10:56.030 05:08:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:56.030 05:08:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:56.030 05:08:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:10:56.030 05:08:15 -- common/autotest_common.sh@861 -- # break 00:10:56.030 05:08:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:56.030 05:08:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:56.030 05:08:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.030 1+0 records in 00:10:56.030 1+0 records out 00:10:56.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000819026 s, 5.0 MB/s 00:10:56.030 05:08:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.030 05:08:15 -- common/autotest_common.sh@874 -- # size=4096 00:10:56.030 05:08:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.030 05:08:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:56.030 05:08:15 -- common/autotest_common.sh@877 -- # return 0 00:10:56.030 05:08:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:56.030 05:08:15 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:56.030 05:08:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:56.291 05:08:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:56.291 05:08:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:56.291 05:08:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:56.291 05:08:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:10:56.291 05:08:15 -- common/autotest_common.sh@857 -- # local i 00:10:56.291 05:08:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:56.291 05:08:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:56.291 05:08:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:10:56.291 05:08:15 -- common/autotest_common.sh@861 -- # break 00:10:56.291 05:08:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:56.291 05:08:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:56.291 05:08:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.291 1+0 records in 00:10:56.291 1+0 records out 00:10:56.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623968 s, 6.6 MB/s 00:10:56.291 05:08:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.291 05:08:15 -- common/autotest_common.sh@874 -- # size=4096 00:10:56.291 05:08:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.291 05:08:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:56.291 05:08:15 -- common/autotest_common.sh@877 -- # return 0 00:10:56.291 05:08:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:56.291 05:08:15 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:56.291 05:08:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:56.611 05:08:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:56.611 05:08:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:56.611 05:08:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:56.611 05:08:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:10:56.611 05:08:15 -- common/autotest_common.sh@857 -- # local i 00:10:56.611 05:08:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:56.611 05:08:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:56.611 05:08:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:10:56.611 05:08:15 -- common/autotest_common.sh@861 -- # break 00:10:56.611 05:08:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:56.611 05:08:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:56.611 05:08:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.611 1+0 records in 00:10:56.611 1+0 records out 00:10:56.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716892 s, 5.7 MB/s 00:10:56.611 05:08:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.611 05:08:15 -- common/autotest_common.sh@874 -- # size=4096 00:10:56.611 05:08:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.611 05:08:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:56.611 05:08:15 -- common/autotest_common.sh@877 -- # return 0 
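Each nbd_start_disk in this sequence is followed by the same health check: waitfornbd greps /proc/partitions until the device name appears, then proves the device is actually readable with a single direct-I/O block. A condensed sketch of that check, assuming the nbdtest scratch file path used throughout this run:

waitfornbd() {
    local nbd_name=$1 i size
    local scratch=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumption: the trace loops without showing its delay
    done
    # One 4 KiB O_DIRECT read; if it fails, the export is not really live.
    dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
    size=$(stat -c %s "$scratch")
    rm -f "$scratch"
    [[ $size != 0 ]]   # a non-empty copy-out means the block device answered the read
}

The throughput figures dd prints above (roughly 2.9-7.8 MB/s for a single 4 KiB block) are dominated by per-call setup latency; only the success of the read matters to the check.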
00:10:56.611 05:08:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:56.611 05:08:15 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:56.611 05:08:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:56.870 05:08:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:56.870 05:08:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:56.870 05:08:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:56.870 05:08:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:10:56.870 05:08:15 -- common/autotest_common.sh@857 -- # local i 00:10:56.870 05:08:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:56.870 05:08:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:56.870 05:08:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:10:56.870 05:08:15 -- common/autotest_common.sh@861 -- # break 00:10:56.870 05:08:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:56.871 05:08:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:56.871 05:08:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.871 1+0 records in 00:10:56.871 1+0 records out 00:10:56.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00143064 s, 2.9 MB/s 00:10:56.871 05:08:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.871 05:08:15 -- common/autotest_common.sh@874 -- # size=4096 00:10:56.871 05:08:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.871 05:08:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:56.871 05:08:15 -- common/autotest_common.sh@877 -- # return 0 00:10:56.871 05:08:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:56.871 05:08:15 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:56.871 05:08:15 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:56.871 05:08:15 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:56.871 { 00:10:56.871 "nbd_device": "/dev/nbd0", 00:10:56.871 "bdev_name": "Nvme0n1p1" 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "nbd_device": "/dev/nbd1", 00:10:56.871 "bdev_name": "Nvme0n1p2" 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "nbd_device": "/dev/nbd2", 00:10:56.871 "bdev_name": "Nvme1n1" 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "nbd_device": "/dev/nbd3", 00:10:56.871 "bdev_name": "Nvme2n1" 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "nbd_device": "/dev/nbd4", 00:10:56.871 "bdev_name": "Nvme2n2" 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "nbd_device": "/dev/nbd5", 00:10:56.871 "bdev_name": "Nvme2n3" 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "nbd_device": "/dev/nbd6", 00:10:56.871 "bdev_name": "Nvme3n1" 00:10:56.871 } 00:10:56.871 ]' 00:10:56.871 05:08:15 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:56.871 05:08:15 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:56.871 05:08:15 -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:56.871 { 00:10:56.871 "nbd_device": "/dev/nbd0", 00:10:56.871 "bdev_name": "Nvme0n1p1" 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "nbd_device": "/dev/nbd1", 00:10:56.871 "bdev_name": "Nvme0n1p2" 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "nbd_device": "/dev/nbd2", 00:10:56.871 "bdev_name": "Nvme1n1" 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "nbd_device": 
"/dev/nbd3", 00:10:56.871 "bdev_name": "Nvme2n1" 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "nbd_device": "/dev/nbd4", 00:10:56.871 "bdev_name": "Nvme2n2" 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "nbd_device": "/dev/nbd5", 00:10:56.871 "bdev_name": "Nvme2n3" 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "nbd_device": "/dev/nbd6", 00:10:56.871 "bdev_name": "Nvme3n1" 00:10:56.871 } 00:10:56.871 ]' 00:10:57.130 05:08:15 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:57.130 05:08:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:57.130 05:08:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:57.130 05:08:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:57.130 05:08:15 -- bdev/nbd_common.sh@51 -- # local i 00:10:57.130 05:08:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:57.130 05:08:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:57.130 05:08:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:57.130 05:08:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:57.130 05:08:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:57.130 05:08:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:57.130 05:08:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:57.130 05:08:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:57.130 05:08:16 -- bdev/nbd_common.sh@41 -- # break 00:10:57.130 05:08:16 -- bdev/nbd_common.sh@45 -- # return 0 00:10:57.130 05:08:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:57.130 05:08:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:57.389 05:08:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:57.389 05:08:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:57.389 05:08:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:57.389 05:08:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:57.389 05:08:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:57.389 05:08:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:57.389 05:08:16 -- bdev/nbd_common.sh@41 -- # break 00:10:57.389 05:08:16 -- bdev/nbd_common.sh@45 -- # return 0 00:10:57.389 05:08:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:57.389 05:08:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:57.647 05:08:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:57.647 05:08:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:57.647 05:08:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:57.647 05:08:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:57.647 05:08:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:57.647 05:08:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:57.647 05:08:16 -- bdev/nbd_common.sh@41 -- # break 00:10:57.647 05:08:16 -- bdev/nbd_common.sh@45 -- # return 0 00:10:57.648 05:08:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:57.648 05:08:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:57.906 05:08:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:10:57.906 05:08:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:57.906 05:08:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:57.906 05:08:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:57.906 05:08:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:57.907 05:08:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:57.907 05:08:16 -- bdev/nbd_common.sh@41 -- # break 00:10:57.907 05:08:16 -- bdev/nbd_common.sh@45 -- # return 0 00:10:57.907 05:08:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:57.907 05:08:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:57.907 05:08:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:57.907 05:08:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:57.907 05:08:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:57.907 05:08:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:57.907 05:08:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:57.907 05:08:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:57.907 05:08:17 -- bdev/nbd_common.sh@41 -- # break 00:10:57.907 05:08:17 -- bdev/nbd_common.sh@45 -- # return 0 00:10:57.907 05:08:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:57.907 05:08:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:58.165 05:08:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:58.165 05:08:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:58.165 05:08:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:58.165 05:08:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:58.165 05:08:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:58.165 05:08:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:58.165 05:08:17 -- bdev/nbd_common.sh@41 -- # break 00:10:58.165 05:08:17 -- bdev/nbd_common.sh@45 -- # return 0 00:10:58.165 05:08:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:58.165 05:08:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:58.424 05:08:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:58.425 05:08:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:58.425 05:08:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:10:58.425 05:08:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:58.425 05:08:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:58.425 05:08:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:58.425 05:08:17 -- bdev/nbd_common.sh@41 -- # break 00:10:58.425 05:08:17 -- bdev/nbd_common.sh@45 -- # return 0 00:10:58.425 05:08:17 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:58.425 05:08:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.425 05:08:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:58.684 05:08:17 -- 
bdev/nbd_common.sh@65 -- # true 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@65 -- # count=0 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@122 -- # count=0 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@127 -- # return 0 00:10:58.684 05:08:17 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@12 -- # local i 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:58.684 05:08:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:10:58.943 /dev/nbd0 00:10:58.944 05:08:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:58.944 05:08:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:58.944 05:08:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:58.944 05:08:17 -- common/autotest_common.sh@857 -- # local i 00:10:58.944 05:08:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:58.944 05:08:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:58.944 05:08:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:58.944 05:08:17 -- common/autotest_common.sh@861 -- # break 00:10:58.944 05:08:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:58.944 05:08:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:58.944 05:08:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:58.944 1+0 records in 00:10:58.944 1+0 records out 00:10:58.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631627 s, 6.5 MB/s 00:10:58.944 05:08:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.944 05:08:17 -- common/autotest_common.sh@874 -- # size=4096 00:10:58.944 05:08:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.944 05:08:17 
-- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:58.944 05:08:17 -- common/autotest_common.sh@877 -- # return 0 00:10:58.944 05:08:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:58.944 05:08:17 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:58.944 05:08:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:10:59.203 /dev/nbd1 00:10:59.203 05:08:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:59.203 05:08:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:59.203 05:08:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:59.203 05:08:18 -- common/autotest_common.sh@857 -- # local i 00:10:59.203 05:08:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:59.203 05:08:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:59.203 05:08:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:59.203 05:08:18 -- common/autotest_common.sh@861 -- # break 00:10:59.203 05:08:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:59.203 05:08:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:59.203 05:08:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.203 1+0 records in 00:10:59.203 1+0 records out 00:10:59.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624585 s, 6.6 MB/s 00:10:59.203 05:08:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.203 05:08:18 -- common/autotest_common.sh@874 -- # size=4096 00:10:59.203 05:08:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.203 05:08:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:59.203 05:08:18 -- common/autotest_common.sh@877 -- # return 0 00:10:59.203 05:08:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:59.203 05:08:18 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:59.203 05:08:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:10:59.203 /dev/nbd10 00:10:59.203 05:08:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:59.203 05:08:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:59.203 05:08:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:10:59.203 05:08:18 -- common/autotest_common.sh@857 -- # local i 00:10:59.203 05:08:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:59.203 05:08:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:59.203 05:08:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:10:59.203 05:08:18 -- common/autotest_common.sh@861 -- # break 00:10:59.203 05:08:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:59.203 05:08:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:59.203 05:08:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.203 1+0 records in 00:10:59.203 1+0 records out 00:10:59.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619242 s, 6.6 MB/s 00:10:59.203 05:08:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.203 05:08:18 -- common/autotest_common.sh@874 -- # size=4096 00:10:59.203 05:08:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
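The attach path runs the mirror-image check for every device: after nbd_start_disk succeeds, waitfornbd first waits for the name to appear in /proc/partitions, then issues a single 4 KiB O_DIRECT read through the device and confirms via stat that a full block came back (the "1+0 records in/out" and throughput lines above are dd's stderr). A sketch consistent with the trace (common/autotest_common.sh@856-877); the retry sleeps and the scratch-file path are assumptions, as the traced runs use the repo's test/bdev/nbdtest and always succeed on the first pass:

  # waitfornbd, reconstructed from the trace above (sketch under stated assumptions)
  waitfornbd() {
      local nbd_name=$1 i size
      local scratch=/tmp/nbdtest            # the trace uses $rootdir/test/bdev/nbdtest
      # phase 1: wait for the kernel to register the device
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                         # interval assumed; not visible in the trace
      done
      # phase 2: prove the device is readable with one direct-I/O block
      for ((i = 1; i <= 20; i++)); do
          if dd if=/dev/$nbd_name of=$scratch bs=4096 count=1 iflag=direct 2> /dev/null; then
              size=$(stat -c %s $scratch)
              rm -f $scratch
              [ "$size" != 0 ] && return 0  # the '[' 4096 '!=' 0 ']' check in the trace
          fi
          sleep 0.1
      done
      return 1
  }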
00:10:59.463 05:08:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:59.463 05:08:18 -- common/autotest_common.sh@877 -- # return 0 00:10:59.463 05:08:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:59.463 05:08:18 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:59.463 05:08:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:59.463 /dev/nbd11 00:10:59.463 05:08:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:59.463 05:08:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:59.463 05:08:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:10:59.463 05:08:18 -- common/autotest_common.sh@857 -- # local i 00:10:59.463 05:08:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:59.463 05:08:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:59.463 05:08:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:10:59.463 05:08:18 -- common/autotest_common.sh@861 -- # break 00:10:59.463 05:08:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:59.463 05:08:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:59.463 05:08:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.463 1+0 records in 00:10:59.463 1+0 records out 00:10:59.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000699505 s, 5.9 MB/s 00:10:59.463 05:08:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.722 05:08:18 -- common/autotest_common.sh@874 -- # size=4096 00:10:59.722 05:08:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.722 05:08:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:59.722 05:08:18 -- common/autotest_common.sh@877 -- # return 0 00:10:59.722 05:08:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:59.722 05:08:18 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:59.722 05:08:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:59.722 /dev/nbd12 00:10:59.722 05:08:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:59.722 05:08:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:59.722 05:08:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:10:59.722 05:08:18 -- common/autotest_common.sh@857 -- # local i 00:10:59.722 05:08:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:59.722 05:08:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:59.722 05:08:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:10:59.722 05:08:18 -- common/autotest_common.sh@861 -- # break 00:10:59.722 05:08:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:59.722 05:08:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:59.722 05:08:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.722 1+0 records in 00:10:59.722 1+0 records out 00:10:59.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442623 s, 9.3 MB/s 00:10:59.722 05:08:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.722 05:08:18 -- common/autotest_common.sh@874 -- # size=4096 00:10:59.722 05:08:18 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.722 05:08:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:59.722 05:08:18 -- common/autotest_common.sh@877 -- # return 0 00:10:59.722 05:08:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:59.722 05:08:18 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:59.722 05:08:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:59.981 /dev/nbd13 00:10:59.981 05:08:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:59.981 05:08:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:59.981 05:08:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:10:59.981 05:08:19 -- common/autotest_common.sh@857 -- # local i 00:10:59.981 05:08:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:59.981 05:08:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:59.981 05:08:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:10:59.981 05:08:19 -- common/autotest_common.sh@861 -- # break 00:10:59.981 05:08:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:59.981 05:08:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:59.981 05:08:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.981 1+0 records in 00:10:59.981 1+0 records out 00:10:59.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571605 s, 7.2 MB/s 00:10:59.981 05:08:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.981 05:08:19 -- common/autotest_common.sh@874 -- # size=4096 00:10:59.981 05:08:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.981 05:08:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:59.981 05:08:19 -- common/autotest_common.sh@877 -- # return 0 00:10:59.981 05:08:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:59.981 05:08:19 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:59.981 05:08:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:00.240 /dev/nbd14 00:11:00.240 05:08:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:00.240 05:08:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:00.240 05:08:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:11:00.240 05:08:19 -- common/autotest_common.sh@857 -- # local i 00:11:00.240 05:08:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:00.240 05:08:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:00.240 05:08:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:11:00.240 05:08:19 -- common/autotest_common.sh@861 -- # break 00:11:00.240 05:08:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:00.240 05:08:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:00.240 05:08:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:00.240 1+0 records in 00:11:00.240 1+0 records out 00:11:00.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000886907 s, 4.6 MB/s 00:11:00.240 05:08:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.240 05:08:19 -- common/autotest_common.sh@874 -- # size=4096 00:11:00.240 05:08:19 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:00.240 05:08:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:00.240 05:08:19 -- common/autotest_common.sh@877 -- # return 0 00:11:00.240 05:08:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:00.240 05:08:19 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:00.240 05:08:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:00.240 05:08:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.240 05:08:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd0", 00:11:00.499 "bdev_name": "Nvme0n1p1" 00:11:00.499 }, 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd1", 00:11:00.499 "bdev_name": "Nvme0n1p2" 00:11:00.499 }, 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd10", 00:11:00.499 "bdev_name": "Nvme1n1" 00:11:00.499 }, 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd11", 00:11:00.499 "bdev_name": "Nvme2n1" 00:11:00.499 }, 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd12", 00:11:00.499 "bdev_name": "Nvme2n2" 00:11:00.499 }, 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd13", 00:11:00.499 "bdev_name": "Nvme2n3" 00:11:00.499 }, 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd14", 00:11:00.499 "bdev_name": "Nvme3n1" 00:11:00.499 } 00:11:00.499 ]' 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd0", 00:11:00.499 "bdev_name": "Nvme0n1p1" 00:11:00.499 }, 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd1", 00:11:00.499 "bdev_name": "Nvme0n1p2" 00:11:00.499 }, 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd10", 00:11:00.499 "bdev_name": "Nvme1n1" 00:11:00.499 }, 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd11", 00:11:00.499 "bdev_name": "Nvme2n1" 00:11:00.499 }, 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd12", 00:11:00.499 "bdev_name": "Nvme2n2" 00:11:00.499 }, 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd13", 00:11:00.499 "bdev_name": "Nvme2n3" 00:11:00.499 }, 00:11:00.499 { 00:11:00.499 "nbd_device": "/dev/nbd14", 00:11:00.499 "bdev_name": "Nvme3n1" 00:11:00.499 } 00:11:00.499 ]' 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:00.499 /dev/nbd1 00:11:00.499 /dev/nbd10 00:11:00.499 /dev/nbd11 00:11:00.499 /dev/nbd12 00:11:00.499 /dev/nbd13 00:11:00.499 /dev/nbd14' 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:00.499 /dev/nbd1 00:11:00.499 /dev/nbd10 00:11:00.499 /dev/nbd11 00:11:00.499 /dev/nbd12 00:11:00.499 /dev/nbd13 00:11:00.499 /dev/nbd14' 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@65 -- # count=7 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@66 -- # echo 7 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@95 -- # count=7 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:00.499 256+0 records in 00:11:00.499 256+0 records out 00:11:00.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00887856 s, 118 MB/s 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:00.499 05:08:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:00.758 256+0 records in 00:11:00.758 256+0 records out 00:11:00.758 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143723 s, 7.3 MB/s 00:11:00.758 05:08:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:00.758 05:08:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:01.017 256+0 records in 00:11:01.017 256+0 records out 00:11:01.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148661 s, 7.1 MB/s 00:11:01.017 05:08:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:01.017 05:08:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:01.017 256+0 records in 00:11:01.017 256+0 records out 00:11:01.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143501 s, 7.3 MB/s 00:11:01.017 05:08:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:01.017 05:08:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:01.276 256+0 records in 00:11:01.276 256+0 records out 00:11:01.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148473 s, 7.1 MB/s 00:11:01.276 05:08:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:01.276 05:08:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:01.276 256+0 records in 00:11:01.276 256+0 records out 00:11:01.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146753 s, 7.1 MB/s 00:11:01.276 05:08:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:01.276 05:08:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:01.535 256+0 records in 00:11:01.535 256+0 records out 00:11:01.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146923 s, 7.1 MB/s 00:11:01.535 05:08:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:01.535 05:08:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:01.535 256+0 records in 00:11:01.535 256+0 records out 00:11:01.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14903 s, 7.0 MB/s 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@71 -- # local 
operation=verify 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@51 -- # local i 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.794 05:08:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:02.053 05:08:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:02.053 05:08:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:02.053 05:08:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:02.053 05:08:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.053 05:08:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.053 05:08:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:02.053 05:08:20 -- bdev/nbd_common.sh@41 -- # break 00:11:02.053 05:08:20 -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.053 05:08:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.053 05:08:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:02.312 05:08:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:02.312 05:08:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:02.312 05:08:21 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:11:02.312 05:08:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.312 05:08:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.312 05:08:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:02.312 05:08:21 -- bdev/nbd_common.sh@41 -- # break 00:11:02.312 05:08:21 -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.312 05:08:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.312 05:08:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:02.571 05:08:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:02.571 05:08:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:02.571 05:08:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:02.571 05:08:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.571 05:08:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.571 05:08:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:02.571 05:08:21 -- bdev/nbd_common.sh@41 -- # break 00:11:02.571 05:08:21 -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.571 05:08:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.571 05:08:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:02.830 05:08:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:02.830 05:08:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:02.830 05:08:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:02.830 05:08:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.830 05:08:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.830 05:08:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:02.830 05:08:21 -- bdev/nbd_common.sh@41 -- # break 00:11:02.830 05:08:21 -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.830 05:08:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.830 05:08:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:03.088 05:08:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:03.088 05:08:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:03.088 05:08:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:03.088 05:08:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.088 05:08:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.088 05:08:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:03.088 05:08:22 -- bdev/nbd_common.sh@41 -- # break 00:11:03.088 05:08:22 -- bdev/nbd_common.sh@45 -- # return 0 00:11:03.088 05:08:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:03.088 05:08:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@41 -- # break 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@45 -- # return 0 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
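The write/verify round trip that the loops above are tearing down boils down to three dd/cmp steps, all visible verbatim in the trace: seed 1 MiB of random data, push it through every nbd device with O_DIRECT, then read each device back and compare byte-for-byte. Condensed into standalone shell (scratch path shortened for illustration; the trace uses the repo's test/bdev/nbdrandtest):

  # the nbd_dd_data_verify write and verify phases above, condensed (sketch)
  tmp=/tmp/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
  dd if=/dev/urandom of=$tmp bs=4096 count=256              # 1 MiB of random payload
  for dev in "${nbd_list[@]}"; do
      dd if=$tmp of=$dev bs=4096 count=256 oflag=direct     # ~7 MB/s per device in the trace
  done
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M $tmp $dev                                # byte-for-byte read-back check
  done
  rm $tmp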
00:11:03.352 05:08:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@41 -- # break 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@45 -- # return 0 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:03.352 05:08:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@65 -- # true 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@65 -- # count=0 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@104 -- # count=0 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@109 -- # return 0 00:11:03.612 05:08:22 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:03.612 05:08:22 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:03.871 malloc_lvol_verify 00:11:03.871 05:08:22 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:04.130 1746cb81-b6ac-402e-911c-70a08440e910 00:11:04.130 05:08:23 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:04.391 5b6a103e-b445-406a-8788-80311273edd1 00:11:04.391 05:08:23 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:04.391 /dev/nbd0 00:11:04.391 05:08:23 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:04.391 mke2fs 1.46.5 (30-Dec-2021) 00:11:04.391 Discarding device blocks: 0/4096 done 00:11:04.391 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:04.391 00:11:04.391 Allocating group tables: 0/1 done 00:11:04.391 Writing inode tables: 0/1 done 00:11:04.391 Creating journal (1024 blocks): done 
00:11:04.391 Writing superblocks and filesystem accounting information: 0/1 done 00:11:04.391 00:11:04.391 05:08:23 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:04.391 05:08:23 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:04.391 05:08:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:04.391 05:08:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:04.391 05:08:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:04.391 05:08:23 -- bdev/nbd_common.sh@51 -- # local i 00:11:04.391 05:08:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:04.391 05:08:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:04.650 05:08:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:04.650 05:08:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:04.650 05:08:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:04.650 05:08:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:04.650 05:08:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:04.650 05:08:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:04.650 05:08:23 -- bdev/nbd_common.sh@41 -- # break 00:11:04.650 05:08:23 -- bdev/nbd_common.sh@45 -- # return 0 00:11:04.650 05:08:23 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:04.650 05:08:23 -- bdev/nbd_common.sh@147 -- # return 0 00:11:04.650 05:08:23 -- bdev/blockdev.sh@324 -- # killprocess 63118 00:11:04.650 05:08:23 -- common/autotest_common.sh@926 -- # '[' -z 63118 ']' 00:11:04.650 05:08:23 -- common/autotest_common.sh@930 -- # kill -0 63118 00:11:04.650 05:08:23 -- common/autotest_common.sh@931 -- # uname 00:11:04.650 05:08:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:04.650 05:08:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63118 00:11:04.650 05:08:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:04.650 killing process with pid 63118 00:11:04.650 05:08:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:04.650 05:08:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63118' 00:11:04.650 05:08:23 -- common/autotest_common.sh@945 -- # kill 63118 00:11:04.650 05:08:23 -- common/autotest_common.sh@950 -- # wait 63118 00:11:06.554 05:08:25 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:11:06.554 00:11:06.554 real 0m12.754s 00:11:06.554 user 0m16.377s 00:11:06.554 sys 0m4.904s 00:11:06.554 05:08:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.554 05:08:25 -- common/autotest_common.sh@10 -- # set +x 00:11:06.554 ************************************ 00:11:06.554 END TEST bdev_nbd 00:11:06.554 ************************************ 00:11:06.554 05:08:25 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:11:06.554 05:08:25 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:11:06.554 05:08:25 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:11:06.554 skipping fio tests on NVMe due to multi-ns failures. 00:11:06.554 05:08:25 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
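The final nbd check above is a small end-to-end smoke test: carve a logical volume out of a malloc bdev, export it as /dev/nbd0, and let mkfs.ext4 format it. The RPC sequence, as traced (rpc.py invoked from the repo root):

  # nbd_with_lvol_verify, condensed from the trace above (sketch)
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs   # mkfs reports 4096 1k blocks
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0   # success => mkfs_ret=0, so the '[' 0 -ne 0 ']' failure guard does not fire

With fio skipped on this NVMe configuration, the suite switches to bdevperf for I/O verification. The run_test command below is runnable standalone from an SPDK checkout; the flags decode as queue depth 128, 4 KiB I/Os, a write-then-read-back 'verify' workload for 5 seconds, on core mask 0x3 (the two reactors the log shows starting on cores 0 and 1; -C is carried over verbatim from the traced command). The later runs differ mainly in I/O size and workload: -o 65536 for bdev_verify_big_io, and -q 128 -o 4096 -w write_zeroes -t 1 on a single core for bdev_write_zeroes.

  # the bdev_verify invocation traced below, standalone form
  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3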
00:11:06.554 05:08:25 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:06.554 05:08:25 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:06.554 05:08:25 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:11:06.554 05:08:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:06.554 05:08:25 -- common/autotest_common.sh@10 -- # set +x 00:11:06.554 ************************************ 00:11:06.554 START TEST bdev_verify 00:11:06.554 ************************************ 00:11:06.554 05:08:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:06.554 [2024-07-26 05:08:25.410807] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:06.554 [2024-07-26 05:08:25.410957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63551 ] 00:11:06.554 [2024-07-26 05:08:25.578188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:06.813 [2024-07-26 05:08:25.866081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.813 [2024-07-26 05:08:25.866117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.750 Running I/O for 5 seconds... 00:11:13.019 00:11:13.019 Latency(us) 00:11:13.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.019 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0x0 length 0x5e800 00:11:13.019 Nvme0n1p1 : 5.06 1784.80 6.97 0.00 0.00 71424.77 11983.73 67907.78 00:11:13.019 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0x5e800 length 0x5e800 00:11:13.019 Nvme0n1p1 : 5.08 1795.65 7.01 0.00 0.00 70650.39 1693.01 68407.10 00:11:13.019 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0x0 length 0x5e7ff 00:11:13.019 Nvme0n1p2 : 5.06 1784.07 6.97 0.00 0.00 71359.58 12358.22 64911.85 00:11:13.019 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:11:13.019 Nvme0n1p2 : 5.08 1794.14 7.01 0.00 0.00 70608.05 5118.05 70404.39 00:11:13.019 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0x0 length 0xa0000 00:11:13.019 Nvme1n1 : 5.07 1781.94 6.96 0.00 0.00 71344.80 15978.30 62914.56 00:11:13.019 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0xa0000 length 0xa0000 00:11:13.019 Nvme1n1 : 5.06 1792.72 7.00 0.00 0.00 71162.74 9424.70 68407.10 00:11:13.019 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0x0 length 0x80000 00:11:13.019 Nvme2n1 : 5.08 1787.08 6.98 0.00 0.00 71162.56 3167.57 61915.92 00:11:13.019 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0x80000 length 0x80000 00:11:13.019 Nvme2n1 : 
5.07 1790.59 6.99 0.00 0.00 71144.03 13294.45 67907.78 00:11:13.019 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0x0 length 0x80000 00:11:13.019 Nvme2n2 : 5.08 1786.26 6.98 0.00 0.00 71102.35 4244.24 59419.31 00:11:13.019 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0x80000 length 0x80000 00:11:13.019 Nvme2n2 : 5.07 1788.37 6.99 0.00 0.00 71041.59 17850.76 65411.17 00:11:13.019 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0x0 length 0x80000 00:11:13.019 Nvme2n3 : 5.08 1785.73 6.98 0.00 0.00 71039.19 4962.01 56922.70 00:11:13.019 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0x80000 length 0x80000 00:11:13.019 Nvme2n3 : 5.07 1787.89 6.98 0.00 0.00 70977.92 17601.10 66409.81 00:11:13.019 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0x0 length 0x20000 00:11:13.019 Nvme3n1 : 5.08 1784.27 6.97 0.00 0.00 70989.99 8301.23 57172.36 00:11:13.019 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:13.019 Verification LBA range: start 0x20000 length 0x20000 00:11:13.019 Nvme3n1 : 5.08 1787.31 6.98 0.00 0.00 70907.27 17725.93 66909.14 00:11:13.019 =================================================================================================================== 00:11:13.019 Total : 25030.83 97.78 0.00 0.00 71064.70 1693.01 70404.39 00:11:15.577 00:11:15.577 real 0m8.888s 00:11:15.577 user 0m16.113s 00:11:15.577 sys 0m0.408s 00:11:15.577 05:08:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.577 ************************************ 00:11:15.577 05:08:34 -- common/autotest_common.sh@10 -- # set +x 00:11:15.577 END TEST bdev_verify 00:11:15.577 ************************************ 00:11:15.577 05:08:34 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:15.577 05:08:34 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:11:15.577 05:08:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:15.577 05:08:34 -- common/autotest_common.sh@10 -- # set +x 00:11:15.577 ************************************ 00:11:15.577 START TEST bdev_verify_big_io 00:11:15.577 ************************************ 00:11:15.577 05:08:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:15.577 [2024-07-26 05:08:34.362429] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:11:15.577 [2024-07-26 05:08:34.362573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63666 ] 00:11:15.578 [2024-07-26 05:08:34.529999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:15.837 [2024-07-26 05:08:34.823368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.837 [2024-07-26 05:08:34.823393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.775 Running I/O for 5 seconds... 00:11:23.344 00:11:23.344 Latency(us) 00:11:23.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:23.344 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0x0 length 0x5e80 00:11:23.344 Nvme0n1p1 : 5.58 164.76 10.30 0.00 0.00 738289.00 211712.49 1046578.71 00:11:23.344 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0x5e80 length 0x5e80 00:11:23.344 Nvme0n1p1 : 5.60 164.15 10.26 0.00 0.00 747138.79 157785.72 1110491.92 00:11:23.344 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0x0 length 0x5e7f 00:11:23.344 Nvme0n1p2 : 5.63 170.74 10.67 0.00 0.00 712972.64 51679.82 970681.78 00:11:23.344 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0x5e7f length 0x5e7f 00:11:23.344 Nvme0n1p2 : 5.60 164.10 10.26 0.00 0.00 732358.53 157785.72 998643.81 00:11:23.344 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0x0 length 0xa000 00:11:23.344 Nvme1n1 : 5.64 179.30 11.21 0.00 0.00 679087.50 7146.54 894784.85 00:11:23.344 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0xa000 length 0xa000 00:11:23.344 Nvme1n1 : 5.65 170.38 10.65 0.00 0.00 702250.97 42442.36 914757.73 00:11:23.344 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0x0 length 0x8000 00:11:23.344 Nvme2n1 : 5.65 179.21 11.20 0.00 0.00 666832.58 7770.70 806904.20 00:11:23.344 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0x8000 length 0x8000 00:11:23.344 Nvme2n1 : 5.66 178.79 11.17 0.00 0.00 664761.82 11172.33 826877.07 00:11:23.344 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0x0 length 0x8000 00:11:23.344 Nvme2n2 : 5.65 179.14 11.20 0.00 0.00 654794.50 8613.30 774947.60 00:11:23.344 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0x8000 length 0x8000 00:11:23.344 Nvme2n2 : 5.66 178.72 11.17 0.00 0.00 651429.94 11796.48 806904.20 00:11:23.344 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0x0 length 0x8000 00:11:23.344 Nvme2n3 : 5.66 186.19 11.64 0.00 0.00 620229.99 2621.44 782936.75 00:11:23.344 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0x8000 length 0x8000 00:11:23.344 Nvme2n3 : 5.67 185.55 11.60 0.00 0.00 615578.02 9736.78 1134459.37 
00:11:23.344 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0x0 length 0x2000 00:11:23.344 Nvme3n1 : 5.66 186.13 11.63 0.00 0.00 607954.00 2886.70 790925.90 00:11:23.344 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:23.344 Verification LBA range: start 0x2000 length 0x2000 00:11:23.344 Nvme3n1 : 5.69 198.34 12.40 0.00 0.00 563811.59 6740.85 1334188.13 00:11:23.344 =================================================================================================================== 00:11:23.344 Total : 2485.51 155.34 0.00 0.00 665359.23 2621.44 1334188.13 00:11:24.725 00:11:24.725 real 0m9.227s 00:11:24.725 user 0m16.768s 00:11:24.725 sys 0m0.479s 00:11:24.725 05:08:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.725 05:08:43 -- common/autotest_common.sh@10 -- # set +x 00:11:24.725 ************************************ 00:11:24.725 END TEST bdev_verify_big_io 00:11:24.725 ************************************ 00:11:24.725 05:08:43 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:24.725 05:08:43 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:24.725 05:08:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:24.725 05:08:43 -- common/autotest_common.sh@10 -- # set +x 00:11:24.725 ************************************ 00:11:24.725 START TEST bdev_write_zeroes 00:11:24.725 ************************************ 00:11:24.725 05:08:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:24.725 [2024-07-26 05:08:43.631993] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:24.725 [2024-07-26 05:08:43.632110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63781 ] 00:11:24.725 [2024-07-26 05:08:43.791623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.000 [2024-07-26 05:08:44.016891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.933 Running I/O for 1 seconds... 
00:11:26.868 00:11:26.868 Latency(us) 00:11:26.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.869 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:26.869 Nvme0n1p1 : 1.01 8837.27 34.52 0.00 0.00 14428.31 11796.48 28711.01 00:11:26.869 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:26.869 Nvme0n1p2 : 1.02 8826.11 34.48 0.00 0.00 14425.95 11983.73 28835.84 00:11:26.869 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:26.869 Nvme1n1 : 1.02 8858.51 34.60 0.00 0.00 14353.65 8800.55 26464.06 00:11:26.869 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:26.869 Nvme2n1 : 1.02 8848.57 34.56 0.00 0.00 14319.95 9112.62 24092.28 00:11:26.869 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:26.869 Nvme2n2 : 1.02 8840.03 34.53 0.00 0.00 14294.50 9112.62 23468.13 00:11:26.869 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:26.869 Nvme2n3 : 1.02 8888.61 34.72 0.00 0.00 14208.29 5086.84 21720.50 00:11:26.869 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:26.869 Nvme3n1 : 1.02 8879.89 34.69 0.00 0.00 14186.74 5242.88 20971.52 00:11:26.869 =================================================================================================================== 00:11:26.869 Total : 61978.99 242.11 0.00 0.00 14316.31 5086.84 28835.84 00:11:28.248 00:11:28.248 real 0m3.544s 00:11:28.248 user 0m3.176s 00:11:28.248 sys 0m0.253s 00:11:28.248 05:08:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.248 05:08:47 -- common/autotest_common.sh@10 -- # set +x 00:11:28.248 ************************************ 00:11:28.248 END TEST bdev_write_zeroes 00:11:28.248 ************************************ 00:11:28.248 05:08:47 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:28.248 05:08:47 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:28.248 05:08:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:28.248 05:08:47 -- common/autotest_common.sh@10 -- # set +x 00:11:28.248 ************************************ 00:11:28.248 START TEST bdev_json_nonenclosed 00:11:28.248 ************************************ 00:11:28.248 05:08:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:28.248 [2024-07-26 05:08:47.280294] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:28.248 [2024-07-26 05:08:47.280475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63840 ] 00:11:28.506 [2024-07-26 05:08:47.466547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.764 [2024-07-26 05:08:47.698862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.764 [2024-07-26 05:08:47.699057] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:11:28.764 [2024-07-26 05:08:47.699079] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:29.332 00:11:29.332 real 0m0.975s 00:11:29.332 user 0m0.702s 00:11:29.332 sys 0m0.167s 00:11:29.332 ************************************ 00:11:29.332 END TEST bdev_json_nonenclosed 00:11:29.332 ************************************ 00:11:29.332 05:08:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.332 05:08:48 -- common/autotest_common.sh@10 -- # set +x 00:11:29.332 05:08:48 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:29.332 05:08:48 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:29.332 05:08:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:29.332 05:08:48 -- common/autotest_common.sh@10 -- # set +x 00:11:29.332 ************************************ 00:11:29.332 START TEST bdev_json_nonarray 00:11:29.332 ************************************ 00:11:29.332 05:08:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:29.332 [2024-07-26 05:08:48.317958] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:29.332 [2024-07-26 05:08:48.318136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63871 ] 00:11:29.591 [2024-07-26 05:08:48.506055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.850 [2024-07-26 05:08:48.745554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.850 [2024-07-26 05:08:48.745728] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:11:29.850 [2024-07-26 05:08:48.745760] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:30.109 00:11:30.109 real 0m0.992s 00:11:30.109 user 0m0.710s 00:11:30.109 sys 0m0.175s 00:11:30.109 05:08:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.109 05:08:49 -- common/autotest_common.sh@10 -- # set +x 00:11:30.109 ************************************ 00:11:30.109 END TEST bdev_json_nonarray 00:11:30.109 ************************************ 00:11:30.368 05:08:49 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:11:30.368 05:08:49 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:11:30.368 05:08:49 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:30.368 05:08:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:30.368 05:08:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:30.368 05:08:49 -- common/autotest_common.sh@10 -- # set +x 00:11:30.368 ************************************ 00:11:30.368 START TEST bdev_gpt_uuid 00:11:30.368 ************************************ 00:11:30.368 05:08:49 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:11:30.368 05:08:49 -- bdev/blockdev.sh@612 -- # local bdev 00:11:30.368 05:08:49 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:11:30.368 05:08:49 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=63902 00:11:30.368 05:08:49 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:30.368 05:08:49 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:30.368 05:08:49 -- bdev/blockdev.sh@47 -- # waitforlisten 63902 00:11:30.368 05:08:49 -- common/autotest_common.sh@819 -- # '[' -z 63902 ']' 00:11:30.368 05:08:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.368 05:08:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:30.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.368 05:08:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.368 05:08:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:30.368 05:08:49 -- common/autotest_common.sh@10 -- # set +x 00:11:30.368 [2024-07-26 05:08:49.384698] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:30.368 [2024-07-26 05:08:49.384866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63902 ] 00:11:30.627 [2024-07-26 05:08:49.566686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.886 [2024-07-26 05:08:49.806256] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:30.886 [2024-07-26 05:08:49.806456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.823 05:08:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:31.823 05:08:50 -- common/autotest_common.sh@852 -- # return 0 00:11:31.823 05:08:50 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:31.823 05:08:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:31.823 05:08:50 -- common/autotest_common.sh@10 -- # set +x 00:11:32.389 Some configs were skipped because the RPC state that can call them passed over. 
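The gpt_uuid test that follows loads the bdev config, waits for examine to finish, then looks up each GPT partition bdev directly by its unique partition UUID and asserts that the alias and the driver-specific GUID round-trip exactly. The jq filters below are verbatim from the trace; calling scripts/rpc.py directly stands in for the suite's rpc_cmd wrapper, so treat this as a sketch:

  # per-partition check, condensed from the trace below (sketch)
  uuid=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first; the second partition repeats
                                              # this with abf1734f-66e5-4c0f-aa29-4021d4d307df
  bdev=$(scripts/rpc.py bdev_get_bdevs -b "$uuid")
  [[ $(jq -r length <<< "$bdev") == 1 ]]                         # exactly one matching bdev
  [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]]        # alias is the partition UUID
  [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]]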
00:11:32.389 05:08:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:32.389 05:08:51 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:11:32.389 05:08:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:32.389 05:08:51 -- common/autotest_common.sh@10 -- # set +x 00:11:32.389 05:08:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:32.389 05:08:51 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:32.389 05:08:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:32.389 05:08:51 -- common/autotest_common.sh@10 -- # set +x 00:11:32.389 05:08:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:32.389 05:08:51 -- bdev/blockdev.sh@619 -- # bdev='[ 00:11:32.389 { 00:11:32.389 "name": "Nvme0n1p1", 00:11:32.389 "aliases": [ 00:11:32.389 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:32.389 ], 00:11:32.389 "product_name": "GPT Disk", 00:11:32.389 "block_size": 4096, 00:11:32.389 "num_blocks": 774144, 00:11:32.389 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:32.389 "md_size": 64, 00:11:32.389 "md_interleave": false, 00:11:32.389 "dif_type": 0, 00:11:32.389 "assigned_rate_limits": { 00:11:32.389 "rw_ios_per_sec": 0, 00:11:32.389 "rw_mbytes_per_sec": 0, 00:11:32.389 "r_mbytes_per_sec": 0, 00:11:32.389 "w_mbytes_per_sec": 0 00:11:32.389 }, 00:11:32.389 "claimed": false, 00:11:32.389 "zoned": false, 00:11:32.389 "supported_io_types": { 00:11:32.389 "read": true, 00:11:32.389 "write": true, 00:11:32.389 "unmap": true, 00:11:32.389 "write_zeroes": true, 00:11:32.389 "flush": true, 00:11:32.389 "reset": true, 00:11:32.389 "compare": true, 00:11:32.389 "compare_and_write": false, 00:11:32.389 "abort": true, 00:11:32.389 "nvme_admin": false, 00:11:32.389 "nvme_io": false 00:11:32.389 }, 00:11:32.389 "driver_specific": { 00:11:32.389 "gpt": { 00:11:32.389 "base_bdev": "Nvme0n1", 00:11:32.389 "offset_blocks": 256, 00:11:32.389 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:32.389 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:32.389 "partition_name": "SPDK_TEST_first" 00:11:32.389 } 00:11:32.389 } 00:11:32.389 } 00:11:32.389 ]' 00:11:32.389 05:08:51 -- bdev/blockdev.sh@620 -- # jq -r length 00:11:32.389 05:08:51 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:11:32.389 05:08:51 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:11:32.389 05:08:51 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:32.389 05:08:51 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:32.389 05:08:51 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:32.389 05:08:51 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:32.389 05:08:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:32.389 05:08:51 -- common/autotest_common.sh@10 -- # set +x 00:11:32.389 05:08:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:32.389 05:08:51 -- bdev/blockdev.sh@624 -- # bdev='[ 00:11:32.389 { 00:11:32.389 "name": "Nvme0n1p2", 00:11:32.389 "aliases": [ 00:11:32.389 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:32.389 ], 00:11:32.389 "product_name": "GPT Disk", 00:11:32.389 "block_size": 4096, 00:11:32.389 "num_blocks": 774143, 00:11:32.389 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:11:32.389 "md_size": 64, 00:11:32.389 "md_interleave": false, 00:11:32.389 "dif_type": 0, 00:11:32.389 "assigned_rate_limits": { 00:11:32.389 "rw_ios_per_sec": 0, 00:11:32.389 "rw_mbytes_per_sec": 0, 00:11:32.389 "r_mbytes_per_sec": 0, 00:11:32.389 "w_mbytes_per_sec": 0 00:11:32.389 }, 00:11:32.389 "claimed": false, 00:11:32.389 "zoned": false, 00:11:32.389 "supported_io_types": { 00:11:32.389 "read": true, 00:11:32.389 "write": true, 00:11:32.389 "unmap": true, 00:11:32.389 "write_zeroes": true, 00:11:32.389 "flush": true, 00:11:32.389 "reset": true, 00:11:32.389 "compare": true, 00:11:32.389 "compare_and_write": false, 00:11:32.389 "abort": true, 00:11:32.389 "nvme_admin": false, 00:11:32.389 "nvme_io": false 00:11:32.389 }, 00:11:32.389 "driver_specific": { 00:11:32.389 "gpt": { 00:11:32.389 "base_bdev": "Nvme0n1", 00:11:32.389 "offset_blocks": 774400, 00:11:32.389 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:32.389 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:32.389 "partition_name": "SPDK_TEST_second" 00:11:32.389 } 00:11:32.389 } 00:11:32.389 } 00:11:32.389 ]' 00:11:32.389 05:08:51 -- bdev/blockdev.sh@625 -- # jq -r length 00:11:32.389 05:08:51 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:11:32.389 05:08:51 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:11:32.647 05:08:51 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:32.647 05:08:51 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:32.647 05:08:51 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:32.647 05:08:51 -- bdev/blockdev.sh@629 -- # killprocess 63902 00:11:32.647 05:08:51 -- common/autotest_common.sh@926 -- # '[' -z 63902 ']' 00:11:32.647 05:08:51 -- common/autotest_common.sh@930 -- # kill -0 63902 00:11:32.647 05:08:51 -- common/autotest_common.sh@931 -- # uname 00:11:32.647 05:08:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:32.647 05:08:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63902 00:11:32.647 05:08:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:32.647 05:08:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:32.647 05:08:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63902' 00:11:32.647 killing process with pid 63902 00:11:32.647 05:08:51 -- common/autotest_common.sh@945 -- # kill 63902 00:11:32.647 05:08:51 -- common/autotest_common.sh@950 -- # wait 63902 00:11:35.196 00:11:35.196 real 0m4.762s 00:11:35.196 user 0m5.038s 00:11:35.196 sys 0m0.573s 00:11:35.196 05:08:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.196 ************************************ 00:11:35.196 END TEST bdev_gpt_uuid 00:11:35.196 ************************************ 00:11:35.196 05:08:54 -- common/autotest_common.sh@10 -- # set +x 00:11:35.196 05:08:54 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:11:35.196 05:08:54 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:11:35.196 05:08:54 -- bdev/blockdev.sh@809 -- # cleanup 00:11:35.196 05:08:54 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:35.196 05:08:54 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:35.196 05:08:54 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:11:35.196 05:08:54 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:11:35.196 05:08:54 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:11:35.196 05:08:54 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:35.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:35.762 Waiting for block devices as requested 00:11:35.762 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:36.020 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:36.020 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:36.020 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:41.288 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:41.288 05:09:00 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:11:41.288 05:09:00 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:11:41.548 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:41.548 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:11:41.548 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:41.548 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:11:41.548 05:09:00 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:11:41.548 00:11:41.548 real 1m9.504s 00:11:41.548 user 1m26.855s 00:11:41.548 sys 0m11.878s 00:11:41.548 05:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.548 ************************************ 00:11:41.548 END TEST blockdev_nvme_gpt 00:11:41.548 ************************************ 00:11:41.548 05:09:00 -- common/autotest_common.sh@10 -- # set +x 00:11:41.548 05:09:00 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:41.548 05:09:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:41.548 05:09:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:41.548 05:09:00 -- common/autotest_common.sh@10 -- # set +x 00:11:41.548 ************************************ 00:11:41.548 START TEST nvme 00:11:41.548 ************************************ 00:11:41.548 05:09:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:41.807 * Looking for test storage... 
00:11:41.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:41.807 05:09:00 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:42.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:43.000 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:43.000 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:43.000 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:43.000 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:43.259 05:09:02 -- nvme/nvme.sh@79 -- # uname 00:11:43.259 05:09:02 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:43.259 05:09:02 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:43.259 05:09:02 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:43.259 05:09:02 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:43.259 05:09:02 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:11:43.259 05:09:02 -- common/autotest_common.sh@1045 -- # echo 0 00:11:43.259 05:09:02 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:43.259 05:09:02 -- common/autotest_common.sh@1047 -- # stubpid=64594 00:11:43.259 Waiting for stub to ready for secondary processes... 00:11:43.259 05:09:02 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:11:43.259 05:09:02 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:43.259 05:09:02 -- common/autotest_common.sh@1051 -- # [[ -e /proc/64594 ]] 00:11:43.259 05:09:02 -- common/autotest_common.sh@1052 -- # sleep 1s 00:11:43.259 [2024-07-26 05:09:02.185631] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:11:43.259 [2024-07-26 05:09:02.185734] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.197 05:09:03 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:44.197 05:09:03 -- common/autotest_common.sh@1051 -- # [[ -e /proc/64594 ]] 00:11:44.197 05:09:03 -- common/autotest_common.sh@1052 -- # sleep 1s 00:11:44.197 [2024-07-26 05:09:03.185287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:44.458 [2024-07-26 05:09:03.494884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.458 [2024-07-26 05:09:03.495038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.458 [2024-07-26 05:09:03.495080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.458 [2024-07-26 05:09:03.523474] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:44.458 [2024-07-26 05:09:03.537515] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:44.458 [2024-07-26 05:09:03.537717] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:44.458 [2024-07-26 05:09:03.555294] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:44.458 [2024-07-26 05:09:03.555473] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:44.458 [2024-07-26 05:09:03.555606] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:44.720 [2024-07-26 05:09:03.568067] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:44.720 [2024-07-26 05:09:03.568233] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:44.720 [2024-07-26 05:09:03.568376] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:44.720 [2024-07-26 05:09:03.581013] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:44.720 [2024-07-26 05:09:03.581168] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:44.720 [2024-07-26 05:09:03.581324] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:44.720 [2024-07-26 05:09:03.581447] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:44.720 [2024-07-26 05:09:03.581599] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:45.306 05:09:04 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:45.307 done. 00:11:45.307 05:09:04 -- common/autotest_common.sh@1054 -- # echo done. 
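For reference, the "Waiting for stub to ready" loop above boils down to polling for the stub's ready file while making sure the stub process is still alive; a rough paraphrase, with the pid and paths taken from this run rather than fixed values:

  stubpid=64594                            # pid printed when the stub was launched
  while [ ! -e /var/run/spdk_stub0 ]; do   # the stub creates this once it is ready
      [ -e /proc/$stubpid ] || exit 1      # give up if the stub died first
      sleep 1s
  done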
00:11:45.307 05:09:04 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:45.307 05:09:04 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:11:45.307 05:09:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:45.307 05:09:04 -- common/autotest_common.sh@10 -- # set +x 00:11:45.307 ************************************ 00:11:45.307 START TEST nvme_reset 00:11:45.307 ************************************ 00:11:45.307 05:09:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:45.566 Initializing NVMe Controllers 00:11:45.566 Skipping QEMU NVMe SSD at 0000:00:06.0 00:11:45.566 Skipping QEMU NVMe SSD at 0000:00:07.0 00:11:45.566 Skipping QEMU NVMe SSD at 0000:00:09.0 00:11:45.566 Skipping QEMU NVMe SSD at 0000:00:08.0 00:11:45.566 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:45.566 00:11:45.566 real 0m0.326s 00:11:45.566 user 0m0.132s 00:11:45.566 sys 0m0.154s 00:11:45.566 05:09:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.566 05:09:04 -- common/autotest_common.sh@10 -- # set +x 00:11:45.566 ************************************ 00:11:45.566 END TEST nvme_reset 00:11:45.566 ************************************ 00:11:45.566 05:09:04 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:45.566 05:09:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:45.566 05:09:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:45.566 05:09:04 -- common/autotest_common.sh@10 -- # set +x 00:11:45.566 ************************************ 00:11:45.566 START TEST nvme_identify 00:11:45.566 ************************************ 00:11:45.566 05:09:04 -- common/autotest_common.sh@1104 -- # nvme_identify 00:11:45.566 05:09:04 -- nvme/nvme.sh@12 -- # bdfs=() 00:11:45.566 05:09:04 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:45.566 05:09:04 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:45.566 05:09:04 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:45.566 05:09:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:45.566 05:09:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:45.566 05:09:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:45.566 05:09:04 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:45.566 05:09:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:45.566 05:09:04 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:45.566 05:09:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:45.566 05:09:04 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:46.137 ===================================================== 00:11:46.137 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:46.137 ===================================================== 00:11:46.137 Controller Capabilities/Features 00:11:46.137 ================================ 00:11:46.137 Vendor ID: 1b36 00:11:46.137 Subsystem Vendor ID: 1af4 00:11:46.137 Serial Number: 12340 00:11:46.137 Model Number: QEMU NVMe Ctrl 00:11:46.137 Firmware Version: 8.0.0 00:11:46.137 Recommended Arb Burst: 6 00:11:46.137 IEEE OUI Identifier: 00 54 52 00:11:46.137 Multi-path I/O 00:11:46.137 May have multiple subsystem ports: No 00:11:46.137 May have 
multiple controllers: No 00:11:46.137 Associated with SR-IOV VF: No 00:11:46.137 Max Data Transfer Size: 524288 00:11:46.137 Max Number of Namespaces: 256 00:11:46.137 Max Number of I/O Queues: 64 00:11:46.137 NVMe Specification Version (VS): 1.4 00:11:46.137 NVMe Specification Version (Identify): 1.4 00:11:46.137 Maximum Queue Entries: 2048 00:11:46.137 Contiguous Queues Required: Yes 00:11:46.137 Arbitration Mechanisms Supported 00:11:46.137 Weighted Round Robin: Not Supported 00:11:46.137 Vendor Specific: Not Supported 00:11:46.137 Reset Timeout: 7500 ms 00:11:46.137 Doorbell Stride: 4 bytes 00:11:46.137 NVM Subsystem Reset: Not Supported 00:11:46.137 Command Sets Supported 00:11:46.137 NVM Command Set: Supported 00:11:46.137 Boot Partition: Not Supported 00:11:46.137 Memory Page Size Minimum: 4096 bytes 00:11:46.137 Memory Page Size Maximum: 65536 bytes 00:11:46.137 Persistent Memory Region: Not Supported 00:11:46.137 Optional Asynchronous Events Supported 00:11:46.137 Namespace Attribute Notices: Supported 00:11:46.137 Firmware Activation Notices: Not Supported 00:11:46.137 ANA Change Notices: Not Supported 00:11:46.137 PLE Aggregate Log Change Notices: Not Supported 00:11:46.137 LBA Status Info Alert Notices: Not Supported 00:11:46.137 EGE Aggregate Log Change Notices: Not Supported 00:11:46.137 Normal NVM Subsystem Shutdown event: Not Supported 00:11:46.137 Zone Descriptor Change Notices: Not Supported 00:11:46.137 Discovery Log Change Notices: Not Supported 00:11:46.137 Controller Attributes 00:11:46.137 128-bit Host Identifier: Not Supported 00:11:46.137 Non-Operational Permissive Mode: Not Supported 00:11:46.137 NVM Sets: Not Supported 00:11:46.137 Read Recovery Levels: Not Supported 00:11:46.137 Endurance Groups: Not Supported 00:11:46.137 Predictable Latency Mode: Not Supported 00:11:46.137 Traffic Based Keep ALive: Not Supported 00:11:46.137 Namespace Granularity: Not Supported 00:11:46.137 SQ Associations: Not Supported 00:11:46.137 UUID List: Not Supported 00:11:46.137 Multi-Domain Subsystem: Not Supported 00:11:46.137 Fixed Capacity Management: Not Supported 00:11:46.137 Variable Capacity Management: Not Supported 00:11:46.137 Delete Endurance Group: Not Supported 00:11:46.137 Delete NVM Set: Not Supported 00:11:46.137 Extended LBA Formats Supported: Supported 00:11:46.137 Flexible Data Placement Supported: Not Supported 00:11:46.137 00:11:46.137 Controller Memory Buffer Support 00:11:46.137 ================================ 00:11:46.137 Supported: No 00:11:46.137 00:11:46.137 Persistent Memory Region Support 00:11:46.137 ================================ 00:11:46.137 Supported: No 00:11:46.137 00:11:46.137 Admin Command Set Attributes 00:11:46.137 ============================ 00:11:46.137 Security Send/Receive: Not Supported 00:11:46.137 Format NVM: Supported 00:11:46.137 Firmware Activate/Download: Not Supported 00:11:46.137 Namespace Management: Supported 00:11:46.137 Device Self-Test: Not Supported 00:11:46.137 Directives: Supported 00:11:46.137 NVMe-MI: Not Supported 00:11:46.137 Virtualization Management: Not Supported 00:11:46.137 Doorbell Buffer Config: Supported 00:11:46.137 Get LBA Status Capability: Not Supported 00:11:46.137 Command & Feature Lockdown Capability: Not Supported 00:11:46.137 Abort Command Limit: 4 00:11:46.137 Async Event Request Limit: 4 00:11:46.137 Number of Firmware Slots: N/A 00:11:46.137 Firmware Slot 1 Read-Only: N/A 00:11:46.137 Firmware Activation Without Reset: N/A 00:11:46.137 Multiple Update Detection Support: N/A 00:11:46.137 Firmware 
Update Granularity: No Information Provided 00:11:46.137 [2024-07-26 05:09:04.957640] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 64636 terminated unexpected 00:11:46.137 Per-Namespace SMART Log: Yes 00:11:46.137 Asymmetric Namespace Access Log Page: Not Supported 00:11:46.137 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:46.137 Command Effects Log Page: Supported 00:11:46.137 Get Log Page Extended Data: Supported 00:11:46.137 Telemetry Log Pages: Not Supported 00:11:46.137 Persistent Event Log Pages: Not Supported 00:11:46.137 Supported Log Pages Log Page: May Support 00:11:46.137 Commands Supported & Effects Log Page: Not Supported 00:11:46.137 Feature Identifiers & Effects Log Page:May Support 00:11:46.137 NVMe-MI Commands & Effects Log Page: May Support 00:11:46.137 Data Area 4 for Telemetry Log: Not Supported 00:11:46.137 Error Log Page Entries Supported: 1 00:11:46.137 Keep Alive: Not Supported 00:11:46.137 00:11:46.137 NVM Command Set Attributes 00:11:46.138 ========================== 00:11:46.138 Submission Queue Entry Size 00:11:46.138 Max: 64 00:11:46.138 Min: 64 00:11:46.138 Completion Queue Entry Size 00:11:46.138 Max: 16 00:11:46.138 Min: 16 00:11:46.138 Number of Namespaces: 256 00:11:46.138 Compare Command: Supported 00:11:46.138 Write Uncorrectable Command: Not Supported 00:11:46.138 Dataset Management Command: Supported 00:11:46.138 Write Zeroes Command: Supported 00:11:46.138 Set Features Save Field: Supported 00:11:46.138 Reservations: Not Supported 00:11:46.138 Timestamp: Supported 00:11:46.138 Copy: Supported 00:11:46.138 Volatile Write Cache: Present 00:11:46.138 Atomic Write Unit (Normal): 1 00:11:46.138 Atomic Write Unit (PFail): 1 00:11:46.138 Atomic Compare & Write Unit: 1 00:11:46.138 Fused Compare & Write: Not Supported 00:11:46.138 Scatter-Gather List 00:11:46.138 SGL Command Set: Supported 00:11:46.138 SGL Keyed: Not Supported 00:11:46.138 SGL Bit Bucket Descriptor: Not Supported 00:11:46.138 SGL Metadata Pointer: Not Supported 00:11:46.138 Oversized SGL: Not Supported 00:11:46.138 SGL Metadata Address: Not Supported 00:11:46.138 SGL Offset: Not Supported 00:11:46.138 Transport SGL Data Block: Not Supported 00:11:46.138 Replay Protected Memory Block: Not Supported 00:11:46.138 00:11:46.138 Firmware Slot Information 00:11:46.138 ========================= 00:11:46.138 Active slot: 1 00:11:46.138 Slot 1 Firmware Revision: 1.0 00:11:46.138 00:11:46.138 00:11:46.138 Commands Supported and Effects 00:11:46.138 ============================== 00:11:46.138 Admin Commands 00:11:46.138 -------------- 00:11:46.138 Delete I/O Submission Queue (00h): Supported 00:11:46.138 Create I/O Submission Queue (01h): Supported 00:11:46.138 Get Log Page (02h): Supported 00:11:46.138 Delete I/O Completion Queue (04h): Supported 00:11:46.138 Create I/O Completion Queue (05h): Supported 00:11:46.138 Identify (06h): Supported 00:11:46.138 Abort (08h): Supported 00:11:46.138 Set Features (09h): Supported 00:11:46.138 Get Features (0Ah): Supported 00:11:46.138 Asynchronous Event Request (0Ch): Supported 00:11:46.138 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:46.138 Directive Send (19h): Supported 00:11:46.138 Directive Receive (1Ah): Supported 00:11:46.138 Virtualization Management (1Ch): Supported 00:11:46.138 Doorbell Buffer Config (7Ch): Supported 00:11:46.138 Format NVM (80h): Supported LBA-Change 00:11:46.138 I/O Commands 00:11:46.138 ------------ 00:11:46.138 Flush (00h): Supported LBA-Change 00:11:46.138 Write (01h):
Supported LBA-Change 00:11:46.138 Read (02h): Supported 00:11:46.138 Compare (05h): Supported 00:11:46.138 Write Zeroes (08h): Supported LBA-Change 00:11:46.138 Dataset Management (09h): Supported LBA-Change 00:11:46.138 Unknown (0Ch): Supported 00:11:46.138 Unknown (12h): Supported 00:11:46.138 Copy (19h): Supported LBA-Change 00:11:46.138 Unknown (1Dh): Supported LBA-Change 00:11:46.138 00:11:46.138 Error Log 00:11:46.138 ========= 00:11:46.138 00:11:46.138 Arbitration 00:11:46.138 =========== 00:11:46.138 Arbitration Burst: no limit 00:11:46.138 00:11:46.138 Power Management 00:11:46.138 ================ 00:11:46.138 Number of Power States: 1 00:11:46.138 Current Power State: Power State #0 00:11:46.138 Power State #0: 00:11:46.138 Max Power: 25.00 W 00:11:46.138 Non-Operational State: Operational 00:11:46.138 Entry Latency: 16 microseconds 00:11:46.138 Exit Latency: 4 microseconds 00:11:46.138 Relative Read Throughput: 0 00:11:46.138 Relative Read Latency: 0 00:11:46.138 Relative Write Throughput: 0 00:11:46.138 Relative Write Latency: 0 00:11:46.138 Idle Power: Not Reported 00:11:46.138 [2024-07-26 05:09:04.959463] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 64636 terminated unexpected 00:11:46.138 Active Power: Not Reported 00:11:46.138 Non-Operational Permissive Mode: Not Supported 00:11:46.138 00:11:46.138 Health Information 00:11:46.138 ================== 00:11:46.138 Critical Warnings: 00:11:46.138 Available Spare Space: OK 00:11:46.138 Temperature: OK 00:11:46.138 Device Reliability: OK 00:11:46.138 Read Only: No 00:11:46.138 Volatile Memory Backup: OK 00:11:46.138 Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.138 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:46.138 Available Spare: 0% 00:11:46.138 Available Spare Threshold: 0% 00:11:46.138 Life Percentage Used: 0% 00:11:46.138 Data Units Read: 1580 00:11:46.138 Data Units Written: 710 00:11:46.138 Host Read Commands: 76234 00:11:46.138 Host Write Commands: 37687 00:11:46.138 Controller Busy Time: 0 minutes 00:11:46.138 Power Cycles: 0 00:11:46.138 Power On Hours: 0 hours 00:11:46.138 Unsafe Shutdowns: 0 00:11:46.138 Unrecoverable Media Errors: 0 00:11:46.138 Lifetime Error Log Entries: 0 00:11:46.138 Warning Temperature Time: 0 minutes 00:11:46.138 Critical Temperature Time: 0 minutes 00:11:46.138 00:11:46.138 Number of Queues 00:11:46.138 ================ 00:11:46.138 Number of I/O Submission Queues: 64 00:11:46.138 Number of I/O Completion Queues: 64 00:11:46.138 00:11:46.138 ZNS Specific Controller Data 00:11:46.138 ============================ 00:11:46.138 Zone Append Size Limit: 0 00:11:46.138 00:11:46.138 00:11:46.138 Active Namespaces 00:11:46.138 ================= 00:11:46.138 Namespace ID:1 00:11:46.138 Error Recovery Timeout: Unlimited 00:11:46.138 Command Set Identifier: NVM (00h) 00:11:46.138 Deallocate: Supported 00:11:46.138 Deallocated/Unwritten Error: Supported 00:11:46.138 Deallocated Read Value: All 0x00 00:11:46.138 Deallocate in Write Zeroes: Not Supported 00:11:46.138 Deallocated Guard Field: 0xFFFF 00:11:46.138 Flush: Supported 00:11:46.138 Reservation: Not Supported 00:11:46.138 Metadata Transferred as: Separate Metadata Buffer 00:11:46.138 Namespace Sharing Capabilities: Private 00:11:46.138 Size (in LBAs): 1548666 (5GiB) 00:11:46.138 Capacity (in LBAs): 1548666 (5GiB) 00:11:46.138 Utilization (in LBAs): 1548666 (5GiB) 00:11:46.138 Thin Provisioning: Not Supported 00:11:46.138 Per-NS Atomic Units: No 00:11:46.138 Maximum Single Source Range Length:
128 00:11:46.138 Maximum Copy Length: 128 00:11:46.138 Maximum Source Range Count: 128 00:11:46.138 NGUID/EUI64 Never Reused: No 00:11:46.138 Namespace Write Protected: No 00:11:46.138 Number of LBA Formats: 8 00:11:46.138 Current LBA Format: LBA Format #07 00:11:46.138 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.138 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.138 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.138 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.138 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.138 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.138 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.138 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.138 00:11:46.138 ===================================================== 00:11:46.138 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:46.138 ===================================================== 00:11:46.138 Controller Capabilities/Features 00:11:46.138 ================================ 00:11:46.138 Vendor ID: 1b36 00:11:46.139 Subsystem Vendor ID: 1af4 00:11:46.139 Serial Number: 12341 00:11:46.139 Model Number: QEMU NVMe Ctrl 00:11:46.139 Firmware Version: 8.0.0 00:11:46.139 Recommended Arb Burst: 6 00:11:46.139 IEEE OUI Identifier: 00 54 52 00:11:46.139 Multi-path I/O 00:11:46.139 May have multiple subsystem ports: No 00:11:46.139 May have multiple controllers: No 00:11:46.139 Associated with SR-IOV VF: No 00:11:46.139 Max Data Transfer Size: 524288 00:11:46.139 Max Number of Namespaces: 256 00:11:46.139 Max Number of I/O Queues: 64 00:11:46.139 NVMe Specification Version (VS): 1.4 00:11:46.139 NVMe Specification Version (Identify): 1.4 00:11:46.139 Maximum Queue Entries: 2048 00:11:46.139 Contiguous Queues Required: Yes 00:11:46.139 Arbitration Mechanisms Supported 00:11:46.139 Weighted Round Robin: Not Supported 00:11:46.139 Vendor Specific: Not Supported 00:11:46.139 Reset Timeout: 7500 ms 00:11:46.139 Doorbell Stride: 4 bytes 00:11:46.139 NVM Subsystem Reset: Not Supported 00:11:46.139 Command Sets Supported 00:11:46.139 NVM Command Set: Supported 00:11:46.139 Boot Partition: Not Supported 00:11:46.139 Memory Page Size Minimum: 4096 bytes 00:11:46.139 Memory Page Size Maximum: 65536 bytes 00:11:46.139 Persistent Memory Region: Not Supported 00:11:46.139 Optional Asynchronous Events Supported 00:11:46.139 Namespace Attribute Notices: Supported 00:11:46.139 Firmware Activation Notices: Not Supported 00:11:46.139 ANA Change Notices: Not Supported 00:11:46.139 PLE Aggregate Log Change Notices: Not Supported 00:11:46.139 LBA Status Info Alert Notices: Not Supported 00:11:46.139 EGE Aggregate Log Change Notices: Not Supported 00:11:46.139 Normal NVM Subsystem Shutdown event: Not Supported 00:11:46.139 Zone Descriptor Change Notices: Not Supported 00:11:46.139 Discovery Log Change Notices: Not Supported 00:11:46.139 Controller Attributes 00:11:46.139 128-bit Host Identifier: Not Supported 00:11:46.139 Non-Operational Permissive Mode: Not Supported 00:11:46.139 NVM Sets: Not Supported 00:11:46.139 Read Recovery Levels: Not Supported 00:11:46.139 Endurance Groups: Not Supported 00:11:46.139 Predictable Latency Mode: Not Supported 00:11:46.139 Traffic Based Keep ALive: Not Supported 00:11:46.139 Namespace Granularity: Not Supported 00:11:46.139 SQ Associations: Not Supported 00:11:46.139 UUID List: Not Supported 00:11:46.139 Multi-Domain Subsystem: Not Supported 00:11:46.139 Fixed Capacity Management: Not Supported 00:11:46.139 Variable Capacity 
Management: Not Supported 00:11:46.139 Delete Endurance Group: Not Supported 00:11:46.139 Delete NVM Set: Not Supported 00:11:46.139 Extended LBA Formats Supported: Supported 00:11:46.139 Flexible Data Placement Supported: Not Supported 00:11:46.139 00:11:46.139 Controller Memory Buffer Support 00:11:46.139 ================================ 00:11:46.139 Supported: No 00:11:46.139 00:11:46.139 Persistent Memory Region Support 00:11:46.139 ================================ 00:11:46.139 Supported: No 00:11:46.139 00:11:46.139 Admin Command Set Attributes 00:11:46.139 ============================ 00:11:46.139 Security Send/Receive: Not Supported 00:11:46.139 Format NVM: Supported 00:11:46.139 Firmware Activate/Download: Not Supported 00:11:46.139 Namespace Management: Supported 00:11:46.139 Device Self-Test: Not Supported 00:11:46.139 Directives: Supported 00:11:46.139 NVMe-MI: Not Supported 00:11:46.139 Virtualization Management: Not Supported 00:11:46.139 Doorbell Buffer Config: Supported 00:11:46.139 Get LBA Status Capability: Not Supported 00:11:46.139 Command & Feature Lockdown Capability: Not Supported 00:11:46.139 Abort Command Limit: 4 00:11:46.139 Async Event Request Limit: 4 00:11:46.139 Number of Firmware Slots: N/A 00:11:46.139 Firmware Slot 1 Read-Only: N/A 00:11:46.139 Firmware Activation Without Reset: N/A 00:11:46.139 Multiple Update Detection Support: N/A 00:11:46.139 Firmware Update Granularity: No Information Provided 00:11:46.139 Per-Namespace SMART Log: Yes 00:11:46.139 Asymmetric Namespace Access Log Page: Not Supported 00:11:46.139 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:46.139 Command Effects Log Page: Supported 00:11:46.139 Get Log Page Extended Data: Supported 00:11:46.139 Telemetry Log Pages: Not Supported 00:11:46.139 Persistent Event Log Pages: Not Supported 00:11:46.139 Supported Log Pages Log Page: May Support 00:11:46.139 Commands Supported & Effects Log Page: Not Supported 00:11:46.139 Feature Identifiers & Effects Log Page:May Support 00:11:46.139 NVMe-MI Commands & Effects Log Page: May Support 00:11:46.139 Data Area 4 for Telemetry Log: Not Supported 00:11:46.139 Error Log Page Entries Supported: 1 00:11:46.139 Keep Alive: Not Supported 00:11:46.139 00:11:46.139 NVM Command Set Attributes 00:11:46.139 ========================== 00:11:46.139 Submission Queue Entry Size 00:11:46.139 Max: 64 00:11:46.139 Min: 64 00:11:46.139 Completion Queue Entry Size 00:11:46.139 Max: 16 00:11:46.139 Min: 16 00:11:46.139 Number of Namespaces: 256 00:11:46.139 Compare Command: Supported 00:11:46.139 Write Uncorrectable Command: Not Supported 00:11:46.139 Dataset Management Command: Supported 00:11:46.139 Write Zeroes Command: Supported 00:11:46.139 Set Features Save Field: Supported 00:11:46.139 Reservations: Not Supported 00:11:46.139 Timestamp: Supported 00:11:46.139 Copy: Supported 00:11:46.139 Volatile Write Cache: Present 00:11:46.139 Atomic Write Unit (Normal): 1 00:11:46.139 Atomic Write Unit (PFail): 1 00:11:46.139 Atomic Compare & Write Unit: 1 00:11:46.139 Fused Compare & Write: Not Supported 00:11:46.139 Scatter-Gather List 00:11:46.139 SGL Command Set: Supported 00:11:46.139 SGL Keyed: Not Supported 00:11:46.139 SGL Bit Bucket Descriptor: Not Supported 00:11:46.139 SGL Metadata Pointer: Not Supported 00:11:46.139 Oversized SGL: Not Supported 00:11:46.139 SGL Metadata Address: Not Supported 00:11:46.139 SGL Offset: Not Supported 00:11:46.139 Transport SGL Data Block: Not Supported 00:11:46.139 Replay Protected Memory Block: Not Supported 00:11:46.139 
00:11:46.139 Firmware Slot Information 00:11:46.139 ========================= 00:11:46.139 Active slot: 1 00:11:46.139 Slot 1 Firmware Revision: 1.0 00:11:46.139 00:11:46.139 00:11:46.139 Commands Supported and Effects 00:11:46.139 ============================== 00:11:46.139 Admin Commands 00:11:46.139 -------------- 00:11:46.139 Delete I/O Submission Queue (00h): Supported 00:11:46.139 Create I/O Submission Queue (01h): Supported 00:11:46.139 Get Log Page (02h): Supported 00:11:46.139 Delete I/O Completion Queue (04h): Supported 00:11:46.139 Create I/O Completion Queue (05h): Supported 00:11:46.139 Identify (06h): Supported 00:11:46.139 Abort (08h): Supported 00:11:46.139 Set Features (09h): Supported 00:11:46.139 Get Features (0Ah): Supported 00:11:46.139 Asynchronous Event Request (0Ch): Supported 00:11:46.139 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:46.139 Directive Send (19h): Supported 00:11:46.139 Directive Receive (1Ah): Supported 00:11:46.139 Virtualization Management (1Ch): Supported 00:11:46.139 Doorbell Buffer Config (7Ch): Supported 00:11:46.139 Format NVM (80h): Supported LBA-Change 00:11:46.139 I/O Commands 00:11:46.139 ------------ 00:11:46.139 Flush (00h): Supported LBA-Change 00:11:46.139 Write (01h): Supported LBA-Change 00:11:46.139 Read (02h): Supported 00:11:46.139 Compare (05h): Supported 00:11:46.139 Write Zeroes (08h): Supported LBA-Change 00:11:46.139 Dataset Management (09h): Supported LBA-Change 00:11:46.139 Unknown (0Ch): Supported 00:11:46.139 Unknown (12h): Supported 00:11:46.139 Copy (19h): Supported LBA-Change 00:11:46.139 Unknown (1Dh): Supported LBA-Change 00:11:46.139 00:11:46.139 Error Log 00:11:46.139 ========= 00:11:46.139 00:11:46.139 Arbitration 00:11:46.139 =========== 00:11:46.139 Arbitration Burst: no limit 00:11:46.139 00:11:46.139 Power Management 00:11:46.139 ================ 00:11:46.139 Number of Power States: 1 00:11:46.139 Current Power State: Power State #0 00:11:46.139 Power State #0: 00:11:46.139 Max Power: 25.00 W 00:11:46.139 Non-Operational State: Operational 00:11:46.139 Entry Latency: 16 microseconds 00:11:46.139 Exit Latency: 4 microseconds 00:11:46.139 Relative Read Throughput: 0 00:11:46.139 Relative Read Latency: 0 00:11:46.139 Relative Write Throughput: 0 00:11:46.139 Relative Write Latency: 0 00:11:46.140 Idle Power: Not Reported 00:11:46.140 Active Power: Not Reported 00:11:46.140 Non-Operational Permissive Mode: Not Supported 00:11:46.140 00:11:46.140 Health Information 00:11:46.140 ================== 00:11:46.140 Critical Warnings: 00:11:46.140 Available Spare Space: OK 00:11:46.140 Temperature: OK 00:11:46.140 Device Reliability: OK 00:11:46.140 Read Only: No 00:11:46.140 Volatile Memory Backup: OK 00:11:46.140 Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.140 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:46.140 Available Spare: 0% 00:11:46.140 Available Spare Threshold: 0% 00:11:46.140 Life Percentage Used: 0% 00:11:46.140 Data Units Read: 1142 00:11:46.140 Data Units Written: 523 00:11:46.140 Host Read Commands: 55536 00:11:46.140 Host Write Commands: 27264 00:11:46.140 Controller Busy Time: 0 minutes 00:11:46.140 Power Cycles: 0 00:11:46.140 Power On Hours: 0 hours 00:11:46.140 Unsafe Shutdowns: 0 00:11:46.140 Unrecoverable Media Errors: 0 00:11:46.140 Lifetime Error Log Entries: 0 00:11:46.140 Warning Temperature Time: 0 minutes 00:11:46.140 Critical Temperature Time: 0 minutes 00:11:46.140 00:11:46.140 Number of Queues 00:11:46.140 ================ 00:11:46.140 Number of I/O 
Submission Queues: 64 00:11:46.140 Number of I/O Completion Queues: 64 00:11:46.140 00:11:46.140 ZNS Specific Controller Data 00:11:46.140 ============================ 00:11:46.140 Zone Append Size Limit: 0 00:11:46.140 00:11:46.140 00:11:46.140 Active Namespaces 00:11:46.140 ================= 00:11:46.140 Namespace ID:1 00:11:46.140 Error Recovery Timeout: Unlimited 00:11:46.140 Command Set Identifier: NVM (00h) 00:11:46.140 [2024-07-26 05:09:04.960823] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 64636 terminated unexpected 00:11:46.140 Deallocate: Supported 00:11:46.140 Deallocated/Unwritten Error: Supported 00:11:46.140 Deallocated Read Value: All 0x00 00:11:46.140 Deallocate in Write Zeroes: Not Supported 00:11:46.140 Deallocated Guard Field: 0xFFFF 00:11:46.140 Flush: Supported 00:11:46.140 Reservation: Not Supported 00:11:46.140 Namespace Sharing Capabilities: Private 00:11:46.140 Size (in LBAs): 1310720 (5GiB) 00:11:46.140 Capacity (in LBAs): 1310720 (5GiB) 00:11:46.140 Utilization (in LBAs): 1310720 (5GiB) 00:11:46.140 Thin Provisioning: Not Supported 00:11:46.140 Per-NS Atomic Units: No 00:11:46.140 Maximum Single Source Range Length: 128 00:11:46.140 Maximum Copy Length: 128 00:11:46.140 Maximum Source Range Count: 128 00:11:46.140 NGUID/EUI64 Never Reused: No 00:11:46.140 Namespace Write Protected: No 00:11:46.140 Number of LBA Formats: 8 00:11:46.140 Current LBA Format: LBA Format #04 00:11:46.140 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.140 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.140 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.140 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.140 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.140 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.140 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.140 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.140 00:11:46.140 ===================================================== 00:11:46.140 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:46.140 ===================================================== 00:11:46.140 Controller Capabilities/Features 00:11:46.140 ================================ 00:11:46.140 Vendor ID: 1b36 00:11:46.140 Subsystem Vendor ID: 1af4 00:11:46.140 Serial Number: 12343 00:11:46.140 Model Number: QEMU NVMe Ctrl 00:11:46.140 Firmware Version: 8.0.0 00:11:46.140 Recommended Arb Burst: 6 00:11:46.140 IEEE OUI Identifier: 00 54 52 00:11:46.140 Multi-path I/O 00:11:46.140 May have multiple subsystem ports: No 00:11:46.140 May have multiple controllers: Yes 00:11:46.140 Associated with SR-IOV VF: No 00:11:46.140 Max Data Transfer Size: 524288 00:11:46.140 Max Number of Namespaces: 256 00:11:46.140 Max Number of I/O Queues: 64 00:11:46.140 NVMe Specification Version (VS): 1.4 00:11:46.140 NVMe Specification Version (Identify): 1.4 00:11:46.140 Maximum Queue Entries: 2048 00:11:46.140 Contiguous Queues Required: Yes 00:11:46.140 Arbitration Mechanisms Supported 00:11:46.140 Weighted Round Robin: Not Supported 00:11:46.140 Vendor Specific: Not Supported 00:11:46.140 Reset Timeout: 7500 ms 00:11:46.140 Doorbell Stride: 4 bytes 00:11:46.140 NVM Subsystem Reset: Not Supported 00:11:46.140 Command Sets Supported 00:11:46.140 NVM Command Set: Supported 00:11:46.140 Boot Partition: Not Supported 00:11:46.140 Memory Page Size Minimum: 4096 bytes 00:11:46.140 Memory Page Size Maximum: 65536 bytes 00:11:46.140 Persistent Memory Region: Not Supported 00:11:46.140
Optional Asynchronous Events Supported 00:11:46.140 Namespace Attribute Notices: Supported 00:11:46.140 Firmware Activation Notices: Not Supported 00:11:46.140 ANA Change Notices: Not Supported 00:11:46.140 PLE Aggregate Log Change Notices: Not Supported 00:11:46.140 LBA Status Info Alert Notices: Not Supported 00:11:46.140 EGE Aggregate Log Change Notices: Not Supported 00:11:46.140 Normal NVM Subsystem Shutdown event: Not Supported 00:11:46.140 Zone Descriptor Change Notices: Not Supported 00:11:46.140 Discovery Log Change Notices: Not Supported 00:11:46.140 Controller Attributes 00:11:46.140 128-bit Host Identifier: Not Supported 00:11:46.140 Non-Operational Permissive Mode: Not Supported 00:11:46.140 NVM Sets: Not Supported 00:11:46.140 Read Recovery Levels: Not Supported 00:11:46.140 Endurance Groups: Supported 00:11:46.140 Predictable Latency Mode: Not Supported 00:11:46.140 Traffic Based Keep ALive: Not Supported 00:11:46.140 Namespace Granularity: Not Supported 00:11:46.140 SQ Associations: Not Supported 00:11:46.140 UUID List: Not Supported 00:11:46.140 Multi-Domain Subsystem: Not Supported 00:11:46.140 Fixed Capacity Management: Not Supported 00:11:46.140 Variable Capacity Management: Not Supported 00:11:46.140 Delete Endurance Group: Not Supported 00:11:46.140 Delete NVM Set: Not Supported 00:11:46.140 Extended LBA Formats Supported: Supported 00:11:46.140 Flexible Data Placement Supported: Supported 00:11:46.140 00:11:46.140 Controller Memory Buffer Support 00:11:46.140 ================================ 00:11:46.140 Supported: No 00:11:46.140 00:11:46.140 Persistent Memory Region Support 00:11:46.140 ================================ 00:11:46.140 Supported: No 00:11:46.140 00:11:46.140 Admin Command Set Attributes 00:11:46.140 ============================ 00:11:46.140 Security Send/Receive: Not Supported 00:11:46.140 Format NVM: Supported 00:11:46.140 Firmware Activate/Download: Not Supported 00:11:46.140 Namespace Management: Supported 00:11:46.140 Device Self-Test: Not Supported 00:11:46.140 Directives: Supported 00:11:46.140 NVMe-MI: Not Supported 00:11:46.140 Virtualization Management: Not Supported 00:11:46.140 Doorbell Buffer Config: Supported 00:11:46.140 Get LBA Status Capability: Not Supported 00:11:46.140 Command & Feature Lockdown Capability: Not Supported 00:11:46.140 Abort Command Limit: 4 00:11:46.140 Async Event Request Limit: 4 00:11:46.140 Number of Firmware Slots: N/A 00:11:46.140 Firmware Slot 1 Read-Only: N/A 00:11:46.140 Firmware Activation Without Reset: N/A 00:11:46.140 Multiple Update Detection Support: N/A 00:11:46.140 Firmware Update Granularity: No Information Provided 00:11:46.140 Per-Namespace SMART Log: Yes 00:11:46.140 Asymmetric Namespace Access Log Page: Not Supported 00:11:46.140 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:46.140 Command Effects Log Page: Supported 00:11:46.140 Get Log Page Extended Data: Supported 00:11:46.140 Telemetry Log Pages: Not Supported 00:11:46.140 Persistent Event Log Pages: Not Supported 00:11:46.140 Supported Log Pages Log Page: May Support 00:11:46.140 Commands Supported & Effects Log Page: Not Supported 00:11:46.140 Feature Identifiers & Effects Log Page:May Support 00:11:46.140 NVMe-MI Commands & Effects Log Page: May Support 00:11:46.140 Data Area 4 for Telemetry Log: Not Supported 00:11:46.140 Error Log Page Entries Supported: 1 00:11:46.140 Keep Alive: Not Supported 00:11:46.140 00:11:46.140 NVM Command Set Attributes 00:11:46.140 ========================== 00:11:46.140 Submission Queue Entry Size 
00:11:46.140 Max: 64 00:11:46.140 Min: 64 00:11:46.140 Completion Queue Entry Size 00:11:46.140 Max: 16 00:11:46.140 Min: 16 00:11:46.141 Number of Namespaces: 256 00:11:46.141 Compare Command: Supported 00:11:46.141 Write Uncorrectable Command: Not Supported 00:11:46.141 Dataset Management Command: Supported 00:11:46.141 Write Zeroes Command: Supported 00:11:46.141 Set Features Save Field: Supported 00:11:46.141 Reservations: Not Supported 00:11:46.141 Timestamp: Supported 00:11:46.141 Copy: Supported 00:11:46.141 Volatile Write Cache: Present 00:11:46.141 Atomic Write Unit (Normal): 1 00:11:46.141 Atomic Write Unit (PFail): 1 00:11:46.141 Atomic Compare & Write Unit: 1 00:11:46.141 Fused Compare & Write: Not Supported 00:11:46.141 Scatter-Gather List 00:11:46.141 SGL Command Set: Supported 00:11:46.141 SGL Keyed: Not Supported 00:11:46.141 SGL Bit Bucket Descriptor: Not Supported 00:11:46.141 SGL Metadata Pointer: Not Supported 00:11:46.141 Oversized SGL: Not Supported 00:11:46.141 SGL Metadata Address: Not Supported 00:11:46.141 SGL Offset: Not Supported 00:11:46.141 Transport SGL Data Block: Not Supported 00:11:46.141 Replay Protected Memory Block: Not Supported 00:11:46.141 00:11:46.141 Firmware Slot Information 00:11:46.141 ========================= 00:11:46.141 Active slot: 1 00:11:46.141 Slot 1 Firmware Revision: 1.0 00:11:46.141 00:11:46.141 00:11:46.141 Commands Supported and Effects 00:11:46.141 ============================== 00:11:46.141 Admin Commands 00:11:46.141 -------------- 00:11:46.141 Delete I/O Submission Queue (00h): Supported 00:11:46.141 Create I/O Submission Queue (01h): Supported 00:11:46.141 Get Log Page (02h): Supported 00:11:46.141 Delete I/O Completion Queue (04h): Supported 00:11:46.141 Create I/O Completion Queue (05h): Supported 00:11:46.141 Identify (06h): Supported 00:11:46.141 Abort (08h): Supported 00:11:46.141 Set Features (09h): Supported 00:11:46.141 Get Features (0Ah): Supported 00:11:46.141 Asynchronous Event Request (0Ch): Supported 00:11:46.141 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:46.141 Directive Send (19h): Supported 00:11:46.141 Directive Receive (1Ah): Supported 00:11:46.141 Virtualization Management (1Ch): Supported 00:11:46.141 Doorbell Buffer Config (7Ch): Supported 00:11:46.141 Format NVM (80h): Supported LBA-Change 00:11:46.141 I/O Commands 00:11:46.141 ------------ 00:11:46.141 Flush (00h): Supported LBA-Change 00:11:46.141 Write (01h): Supported LBA-Change 00:11:46.141 Read (02h): Supported 00:11:46.141 Compare (05h): Supported 00:11:46.141 Write Zeroes (08h): Supported LBA-Change 00:11:46.141 Dataset Management (09h): Supported LBA-Change 00:11:46.141 Unknown (0Ch): Supported 00:11:46.141 Unknown (12h): Supported 00:11:46.141 Copy (19h): Supported LBA-Change 00:11:46.141 Unknown (1Dh): Supported LBA-Change 00:11:46.141 00:11:46.141 Error Log 00:11:46.141 ========= 00:11:46.141 00:11:46.141 Arbitration 00:11:46.141 =========== 00:11:46.141 Arbitration Burst: no limit 00:11:46.141 00:11:46.141 Power Management 00:11:46.141 ================ 00:11:46.141 Number of Power States: 1 00:11:46.141 Current Power State: Power State #0 00:11:46.141 Power State #0: 00:11:46.141 Max Power: 25.00 W 00:11:46.141 Non-Operational State: Operational 00:11:46.141 Entry Latency: 16 microseconds 00:11:46.141 Exit Latency: 4 microseconds 00:11:46.141 Relative Read Throughput: 0 00:11:46.141 Relative Read Latency: 0 00:11:46.141 Relative Write Throughput: 0 00:11:46.141 Relative Write Latency: 0 00:11:46.141 Idle Power: Not 
Reported 00:11:46.141 Active Power: Not Reported 00:11:46.141 Non-Operational Permissive Mode: Not Supported 00:11:46.141 00:11:46.141 Health Information 00:11:46.141 ================== 00:11:46.141 Critical Warnings: 00:11:46.141 Available Spare Space: OK 00:11:46.141 Temperature: OK 00:11:46.141 Device Reliability: OK 00:11:46.141 Read Only: No 00:11:46.141 Volatile Memory Backup: OK 00:11:46.141 Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.141 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:46.141 Available Spare: 0% 00:11:46.141 Available Spare Threshold: 0% 00:11:46.141 Life Percentage Used: 0% 00:11:46.141 Data Units Read: 1187 00:11:46.141 Data Units Written: 547 00:11:46.141 Host Read Commands: 56002 00:11:46.141 Host Write Commands: 27501 00:11:46.141 Controller Busy Time: 0 minutes 00:11:46.141 Power Cycles: 0 00:11:46.141 Power On Hours: 0 hours 00:11:46.141 Unsafe Shutdowns: 0 00:11:46.141 Unrecoverable Media Errors: 0 00:11:46.141 Lifetime Error Log Entries: 0 00:11:46.141 Warning Temperature Time: 0 minutes 00:11:46.141 Critical Temperature Time: 0 minutes 00:11:46.141 00:11:46.141 Number of Queues 00:11:46.141 ================ 00:11:46.141 Number of I/O Submission Queues: 64 00:11:46.141 Number of I/O Completion Queues: 64 00:11:46.141 00:11:46.141 ZNS Specific Controller Data 00:11:46.141 ============================ 00:11:46.141 Zone Append Size Limit: 0 00:11:46.141 00:11:46.141 00:11:46.141 Active Namespaces 00:11:46.141 ================= 00:11:46.141 Namespace ID:1 00:11:46.141 Error Recovery Timeout: Unlimited 00:11:46.141 Command Set Identifier: NVM (00h) 00:11:46.141 Deallocate: Supported 00:11:46.141 Deallocated/Unwritten Error: Supported 00:11:46.141 Deallocated Read Value: All 0x00 00:11:46.141 Deallocate in Write Zeroes: Not Supported 00:11:46.141 Deallocated Guard Field: 0xFFFF 00:11:46.141 Flush: Supported 00:11:46.141 Reservation: Not Supported 00:11:46.141 Namespace Sharing Capabilities: Multiple Controllers 00:11:46.141 Size (in LBAs): 262144 (1GiB) 00:11:46.141 Capacity (in LBAs): 262144 (1GiB) 00:11:46.141 Utilization (in LBAs): 262144 (1GiB) 00:11:46.141 Thin Provisioning: Not Supported 00:11:46.141 Per-NS Atomic Units: No 00:11:46.141 Maximum Single Source Range Length: 128 00:11:46.141 Maximum Copy Length: 128 00:11:46.141 Maximum Source Range Count: 128 00:11:46.141 NGUID/EUI64 Never Reused: No 00:11:46.141 Namespace Write Protected: No 00:11:46.141 Endurance group ID: 1 00:11:46.141 Number of LBA Formats: 8 00:11:46.141 Current LBA Format: LBA Format #04 00:11:46.141 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.141 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.141 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.141 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.141 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.141 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.141 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.141 [2024-07-26 05:09:04.963116] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 64636 terminated unexpected 00:11:46.141 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.141 00:11:46.141 Get Feature FDP: 00:11:46.141 ================ 00:11:46.141 Enabled: Yes 00:11:46.141 FDP configuration index: 0 00:11:46.141 00:11:46.141 FDP configurations log page 00:11:46.141 =========================== 00:11:46.141 Number of FDP configurations: 1 00:11:46.141 Version: 0 00:11:46.141 Size: 112 00:11:46.141 FDP
Configuration Descriptor: 0 00:11:46.141 Descriptor Size: 96 00:11:46.141 Reclaim Group Identifier format: 2 00:11:46.141 FDP Volatile Write Cache: Not Present 00:11:46.141 FDP Configuration: Valid 00:11:46.141 Vendor Specific Size: 0 00:11:46.141 Number of Reclaim Groups: 2 00:11:46.141 Number of Reclaim Unit Handles: 8 00:11:46.141 Max Placement Identifiers: 128 00:11:46.141 Number of Namespaces Supported: 256 00:11:46.141 Reclaim unit Nominal Size: 6000000 bytes 00:11:46.141 Estimated Reclaim Unit Time Limit: Not Reported 00:11:46.141 RUH Desc #000: RUH Type: Initially Isolated 00:11:46.141 RUH Desc #001: RUH Type: Initially Isolated 00:11:46.141 RUH Desc #002: RUH Type: Initially Isolated 00:11:46.141 RUH Desc #003: RUH Type: Initially Isolated 00:11:46.141 RUH Desc #004: RUH Type: Initially Isolated 00:11:46.141 RUH Desc #005: RUH Type: Initially Isolated 00:11:46.141 RUH Desc #006: RUH Type: Initially Isolated 00:11:46.141 RUH Desc #007: RUH Type: Initially Isolated 00:11:46.141 00:11:46.141 FDP reclaim unit handle usage log page 00:11:46.141 ====================================== 00:11:46.141 Number of Reclaim Unit Handles: 8 00:11:46.141 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:46.141 RUH Usage Desc #001: RUH Attributes: Unused 00:11:46.142 RUH Usage Desc #002: RUH Attributes: Unused 00:11:46.142 RUH Usage Desc #003: RUH Attributes: Unused 00:11:46.142 RUH Usage Desc #004: RUH Attributes: Unused 00:11:46.142 RUH Usage Desc #005: RUH Attributes: Unused 00:11:46.142 RUH Usage Desc #006: RUH Attributes: Unused 00:11:46.142 RUH Usage Desc #007: RUH Attributes: Unused 00:11:46.142 00:11:46.142 FDP statistics log page 00:11:46.142 ======================= 00:11:46.142 Host bytes with metadata written: 373006336 00:11:46.142 Media bytes with metadata written: 373088256 00:11:46.142 Media bytes erased: 0 00:11:46.142 00:11:46.142 FDP events log page 00:11:46.142 =================== 00:11:46.142 Number of FDP events: 0 00:11:46.142 00:11:46.142 ===================================================== 00:11:46.142 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:46.142 ===================================================== 00:11:46.142 Controller Capabilities/Features 00:11:46.142 ================================ 00:11:46.142 Vendor ID: 1b36 00:11:46.142 Subsystem Vendor ID: 1af4 00:11:46.142 Serial Number: 12342 00:11:46.142 Model Number: QEMU NVMe Ctrl 00:11:46.142 Firmware Version: 8.0.0 00:11:46.142 Recommended Arb Burst: 6 00:11:46.142 IEEE OUI Identifier: 00 54 52 00:11:46.142 Multi-path I/O 00:11:46.142 May have multiple subsystem ports: No 00:11:46.142 May have multiple controllers: No 00:11:46.142 Associated with SR-IOV VF: No 00:11:46.142 Max Data Transfer Size: 524288 00:11:46.142 Max Number of Namespaces: 256 00:11:46.142 Max Number of I/O Queues: 64 00:11:46.142 NVMe Specification Version (VS): 1.4 00:11:46.142 NVMe Specification Version (Identify): 1.4 00:11:46.142 Maximum Queue Entries: 2048 00:11:46.142 Contiguous Queues Required: Yes 00:11:46.142 Arbitration Mechanisms Supported 00:11:46.142 Weighted Round Robin: Not Supported 00:11:46.142 Vendor Specific: Not Supported 00:11:46.142 Reset Timeout: 7500 ms 00:11:46.142 Doorbell Stride: 4 bytes 00:11:46.142 NVM Subsystem Reset: Not Supported 00:11:46.142 Command Sets Supported 00:11:46.142 NVM Command Set: Supported 00:11:46.142 Boot Partition: Not Supported 00:11:46.142 Memory Page Size Minimum: 4096 bytes 00:11:46.142 Memory Page Size Maximum: 65536 bytes 00:11:46.142 Persistent Memory Region: Not
Supported 00:11:46.142 Optional Asynchronous Events Supported 00:11:46.142 Namespace Attribute Notices: Supported 00:11:46.142 Firmware Activation Notices: Not Supported 00:11:46.142 ANA Change Notices: Not Supported 00:11:46.142 PLE Aggregate Log Change Notices: Not Supported 00:11:46.142 LBA Status Info Alert Notices: Not Supported 00:11:46.142 EGE Aggregate Log Change Notices: Not Supported 00:11:46.142 Normal NVM Subsystem Shutdown event: Not Supported 00:11:46.142 Zone Descriptor Change Notices: Not Supported 00:11:46.142 Discovery Log Change Notices: Not Supported 00:11:46.142 Controller Attributes 00:11:46.142 128-bit Host Identifier: Not Supported 00:11:46.142 Non-Operational Permissive Mode: Not Supported 00:11:46.142 NVM Sets: Not Supported 00:11:46.142 Read Recovery Levels: Not Supported 00:11:46.142 Endurance Groups: Not Supported 00:11:46.142 Predictable Latency Mode: Not Supported 00:11:46.142 Traffic Based Keep ALive: Not Supported 00:11:46.142 Namespace Granularity: Not Supported 00:11:46.142 SQ Associations: Not Supported 00:11:46.142 UUID List: Not Supported 00:11:46.142 Multi-Domain Subsystem: Not Supported 00:11:46.142 Fixed Capacity Management: Not Supported 00:11:46.142 Variable Capacity Management: Not Supported 00:11:46.142 Delete Endurance Group: Not Supported 00:11:46.142 Delete NVM Set: Not Supported 00:11:46.142 Extended LBA Formats Supported: Supported 00:11:46.142 Flexible Data Placement Supported: Not Supported 00:11:46.142 00:11:46.142 Controller Memory Buffer Support 00:11:46.142 ================================ 00:11:46.142 Supported: No 00:11:46.142 00:11:46.142 Persistent Memory Region Support 00:11:46.142 ================================ 00:11:46.142 Supported: No 00:11:46.142 00:11:46.142 Admin Command Set Attributes 00:11:46.142 ============================ 00:11:46.142 Security Send/Receive: Not Supported 00:11:46.142 Format NVM: Supported 00:11:46.142 Firmware Activate/Download: Not Supported 00:11:46.142 Namespace Management: Supported 00:11:46.142 Device Self-Test: Not Supported 00:11:46.142 Directives: Supported 00:11:46.142 NVMe-MI: Not Supported 00:11:46.142 Virtualization Management: Not Supported 00:11:46.142 Doorbell Buffer Config: Supported 00:11:46.142 Get LBA Status Capability: Not Supported 00:11:46.142 Command & Feature Lockdown Capability: Not Supported 00:11:46.142 Abort Command Limit: 4 00:11:46.142 Async Event Request Limit: 4 00:11:46.142 Number of Firmware Slots: N/A 00:11:46.142 Firmware Slot 1 Read-Only: N/A 00:11:46.142 Firmware Activation Without Reset: N/A 00:11:46.142 Multiple Update Detection Support: N/A 00:11:46.142 Firmware Update Granularity: No Information Provided 00:11:46.142 Per-Namespace SMART Log: Yes 00:11:46.142 Asymmetric Namespace Access Log Page: Not Supported 00:11:46.142 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:46.142 Command Effects Log Page: Supported 00:11:46.142 Get Log Page Extended Data: Supported 00:11:46.142 Telemetry Log Pages: Not Supported 00:11:46.142 Persistent Event Log Pages: Not Supported 00:11:46.142 Supported Log Pages Log Page: May Support 00:11:46.142 Commands Supported & Effects Log Page: Not Supported 00:11:46.142 Feature Identifiers & Effects Log Page:May Support 00:11:46.142 NVMe-MI Commands & Effects Log Page: May Support 00:11:46.142 Data Area 4 for Telemetry Log: Not Supported 00:11:46.142 Error Log Page Entries Supported: 1 00:11:46.142 Keep Alive: Not Supported 00:11:46.142 00:11:46.142 NVM Command Set Attributes 00:11:46.142 ========================== 00:11:46.142 
Submission Queue Entry Size 00:11:46.142 Max: 64 00:11:46.142 Min: 64 00:11:46.142 Completion Queue Entry Size 00:11:46.142 Max: 16 00:11:46.142 Min: 16 00:11:46.142 Number of Namespaces: 256 00:11:46.142 Compare Command: Supported 00:11:46.142 Write Uncorrectable Command: Not Supported 00:11:46.142 Dataset Management Command: Supported 00:11:46.142 Write Zeroes Command: Supported 00:11:46.142 Set Features Save Field: Supported 00:11:46.142 Reservations: Not Supported 00:11:46.142 Timestamp: Supported 00:11:46.142 Copy: Supported 00:11:46.142 Volatile Write Cache: Present 00:11:46.142 Atomic Write Unit (Normal): 1 00:11:46.142 Atomic Write Unit (PFail): 1 00:11:46.142 Atomic Compare & Write Unit: 1 00:11:46.142 Fused Compare & Write: Not Supported 00:11:46.142 Scatter-Gather List 00:11:46.142 SGL Command Set: Supported 00:11:46.142 SGL Keyed: Not Supported 00:11:46.142 SGL Bit Bucket Descriptor: Not Supported 00:11:46.142 SGL Metadata Pointer: Not Supported 00:11:46.142 Oversized SGL: Not Supported 00:11:46.142 SGL Metadata Address: Not Supported 00:11:46.142 SGL Offset: Not Supported 00:11:46.143 Transport SGL Data Block: Not Supported 00:11:46.143 Replay Protected Memory Block: Not Supported 00:11:46.143 00:11:46.143 Firmware Slot Information 00:11:46.143 ========================= 00:11:46.143 Active slot: 1 00:11:46.143 Slot 1 Firmware Revision: 1.0 00:11:46.143 00:11:46.143 00:11:46.143 Commands Supported and Effects 00:11:46.143 ============================== 00:11:46.143 Admin Commands 00:11:46.143 -------------- 00:11:46.143 Delete I/O Submission Queue (00h): Supported 00:11:46.143 Create I/O Submission Queue (01h): Supported 00:11:46.143 Get Log Page (02h): Supported 00:11:46.143 Delete I/O Completion Queue (04h): Supported 00:11:46.143 Create I/O Completion Queue (05h): Supported 00:11:46.143 Identify (06h): Supported 00:11:46.143 Abort (08h): Supported 00:11:46.143 Set Features (09h): Supported 00:11:46.143 Get Features (0Ah): Supported 00:11:46.143 Asynchronous Event Request (0Ch): Supported 00:11:46.143 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:46.143 Directive Send (19h): Supported 00:11:46.143 Directive Receive (1Ah): Supported 00:11:46.143 Virtualization Management (1Ch): Supported 00:11:46.143 Doorbell Buffer Config (7Ch): Supported 00:11:46.143 Format NVM (80h): Supported LBA-Change 00:11:46.143 I/O Commands 00:11:46.143 ------------ 00:11:46.143 Flush (00h): Supported LBA-Change 00:11:46.143 Write (01h): Supported LBA-Change 00:11:46.143 Read (02h): Supported 00:11:46.143 Compare (05h): Supported 00:11:46.143 Write Zeroes (08h): Supported LBA-Change 00:11:46.143 Dataset Management (09h): Supported LBA-Change 00:11:46.143 Unknown (0Ch): Supported 00:11:46.143 Unknown (12h): Supported 00:11:46.143 Copy (19h): Supported LBA-Change 00:11:46.143 Unknown (1Dh): Supported LBA-Change 00:11:46.143 00:11:46.143 Error Log 00:11:46.143 ========= 00:11:46.143 00:11:46.143 Arbitration 00:11:46.143 =========== 00:11:46.143 Arbitration Burst: no limit 00:11:46.143 00:11:46.143 Power Management 00:11:46.143 ================ 00:11:46.143 Number of Power States: 1 00:11:46.143 Current Power State: Power State #0 00:11:46.143 Power State #0: 00:11:46.143 Max Power: 25.00 W 00:11:46.143 Non-Operational State: Operational 00:11:46.143 Entry Latency: 16 microseconds 00:11:46.143 Exit Latency: 4 microseconds 00:11:46.143 Relative Read Throughput: 0 00:11:46.143 Relative Read Latency: 0 00:11:46.143 Relative Write Throughput: 0 00:11:46.143 Relative Write Latency: 0 
00:11:46.143 Idle Power: Not Reported 00:11:46.143 Active Power: Not Reported 00:11:46.143 Non-Operational Permissive Mode: Not Supported 00:11:46.143 00:11:46.143 Health Information 00:11:46.143 ================== 00:11:46.143 Critical Warnings: 00:11:46.143 Available Spare Space: OK 00:11:46.143 Temperature: OK 00:11:46.143 Device Reliability: OK 00:11:46.143 Read Only: No 00:11:46.143 Volatile Memory Backup: OK 00:11:46.143 Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.143 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:46.143 Available Spare: 0% 00:11:46.143 Available Spare Threshold: 0% 00:11:46.143 Life Percentage Used: 0% 00:11:46.143 Data Units Read: 3500 00:11:46.143 Data Units Written: 1599 00:11:46.143 Host Read Commands: 167736 00:11:46.143 Host Write Commands: 82224 00:11:46.143 Controller Busy Time: 0 minutes 00:11:46.143 Power Cycles: 0 00:11:46.143 Power On Hours: 0 hours 00:11:46.143 Unsafe Shutdowns: 0 00:11:46.143 Unrecoverable Media Errors: 0 00:11:46.143 Lifetime Error Log Entries: 0 00:11:46.143 Warning Temperature Time: 0 minutes 00:11:46.143 Critical Temperature Time: 0 minutes 00:11:46.143 00:11:46.143 Number of Queues 00:11:46.143 ================ 00:11:46.143 Number of I/O Submission Queues: 64 00:11:46.143 Number of I/O Completion Queues: 64 00:11:46.143 00:11:46.143 ZNS Specific Controller Data 00:11:46.143 ============================ 00:11:46.143 Zone Append Size Limit: 0 00:11:46.143 00:11:46.143 00:11:46.143 Active Namespaces 00:11:46.143 ================= 00:11:46.143 Namespace ID:1 00:11:46.143 Error Recovery Timeout: Unlimited 00:11:46.143 Command Set Identifier: NVM (00h) 00:11:46.143 Deallocate: Supported 00:11:46.143 Deallocated/Unwritten Error: Supported 00:11:46.143 Deallocated Read Value: All 0x00 00:11:46.143 Deallocate in Write Zeroes: Not Supported 00:11:46.143 Deallocated Guard Field: 0xFFFF 00:11:46.143 Flush: Supported 00:11:46.143 Reservation: Not Supported 00:11:46.143 Namespace Sharing Capabilities: Private 00:11:46.143 Size (in LBAs): 1048576 (4GiB) 00:11:46.143 Capacity (in LBAs): 1048576 (4GiB) 00:11:46.143 Utilization (in LBAs): 1048576 (4GiB) 00:11:46.143 Thin Provisioning: Not Supported 00:11:46.143 Per-NS Atomic Units: No 00:11:46.143 Maximum Single Source Range Length: 128 00:11:46.143 Maximum Copy Length: 128 00:11:46.143 Maximum Source Range Count: 128 00:11:46.143 NGUID/EUI64 Never Reused: No 00:11:46.143 Namespace Write Protected: No 00:11:46.143 Number of LBA Formats: 8 00:11:46.143 Current LBA Format: LBA Format #04 00:11:46.143 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.143 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.143 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.143 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.143 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.143 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.143 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.143 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.143 00:11:46.143 Namespace ID:2 00:11:46.143 Error Recovery Timeout: Unlimited 00:11:46.143 Command Set Identifier: NVM (00h) 00:11:46.143 Deallocate: Supported 00:11:46.143 Deallocated/Unwritten Error: Supported 00:11:46.143 Deallocated Read Value: All 0x00 00:11:46.143 Deallocate in Write Zeroes: Not Supported 00:11:46.143 Deallocated Guard Field: 0xFFFF 00:11:46.143 Flush: Supported 00:11:46.143 Reservation: Not Supported 00:11:46.143 Namespace Sharing Capabilities: Private 00:11:46.143 Size (in LBAs): 
1048576 (4GiB) 00:11:46.143 Capacity (in LBAs): 1048576 (4GiB) 00:11:46.143 Utilization (in LBAs): 1048576 (4GiB) 00:11:46.143 Thin Provisioning: Not Supported 00:11:46.143 Per-NS Atomic Units: No 00:11:46.143 Maximum Single Source Range Length: 128 00:11:46.143 Maximum Copy Length: 128 00:11:46.143 Maximum Source Range Count: 128 00:11:46.143 NGUID/EUI64 Never Reused: No 00:11:46.143 Namespace Write Protected: No 00:11:46.143 Number of LBA Formats: 8 00:11:46.143 Current LBA Format: LBA Format #04 00:11:46.143 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.143 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.143 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.143 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.143 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.143 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.143 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.143 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.143 00:11:46.143 Namespace ID:3 00:11:46.143 Error Recovery Timeout: Unlimited 00:11:46.143 Command Set Identifier: NVM (00h) 00:11:46.143 Deallocate: Supported 00:11:46.143 Deallocated/Unwritten Error: Supported 00:11:46.143 Deallocated Read Value: All 0x00 00:11:46.143 Deallocate in Write Zeroes: Not Supported 00:11:46.143 Deallocated Guard Field: 0xFFFF 00:11:46.143 Flush: Supported 00:11:46.143 Reservation: Not Supported 00:11:46.143 Namespace Sharing Capabilities: Private 00:11:46.143 Size (in LBAs): 1048576 (4GiB) 00:11:46.143 Capacity (in LBAs): 1048576 (4GiB) 00:11:46.143 Utilization (in LBAs): 1048576 (4GiB) 00:11:46.143 Thin Provisioning: Not Supported 00:11:46.143 Per-NS Atomic Units: No 00:11:46.143 Maximum Single Source Range Length: 128 00:11:46.143 Maximum Copy Length: 128 00:11:46.143 Maximum Source Range Count: 128 00:11:46.143 NGUID/EUI64 Never Reused: No 00:11:46.143 Namespace Write Protected: No 00:11:46.143 Number of LBA Formats: 8 00:11:46.143 Current LBA Format: LBA Format #04 00:11:46.143 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.143 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.143 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.143 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.143 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.143 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.143 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.143 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.143 00:11:46.144 05:09:05 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:46.144 05:09:05 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:11:46.404 ===================================================== 00:11:46.404 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:46.404 ===================================================== 00:11:46.404 Controller Capabilities/Features 00:11:46.404 ================================ 00:11:46.404 Vendor ID: 1b36 00:11:46.404 Subsystem Vendor ID: 1af4 00:11:46.404 Serial Number: 12340 00:11:46.404 Model Number: QEMU NVMe Ctrl 00:11:46.404 Firmware Version: 8.0.0 00:11:46.404 Recommended Arb Burst: 6 00:11:46.404 IEEE OUI Identifier: 00 54 52 00:11:46.404 Multi-path I/O 00:11:46.404 May have multiple subsystem ports: No 00:11:46.404 May have multiple controllers: No 00:11:46.404 Associated with SR-IOV VF: No 00:11:46.404 Max Data Transfer Size: 524288 00:11:46.404 Max Number of Namespaces: 256 
00:11:46.404 Max Number of I/O Queues: 64 00:11:46.404 NVMe Specification Version (VS): 1.4 00:11:46.404 NVMe Specification Version (Identify): 1.4 00:11:46.404 Maximum Queue Entries: 2048 00:11:46.404 Contiguous Queues Required: Yes 00:11:46.404 Arbitration Mechanisms Supported 00:11:46.404 Weighted Round Robin: Not Supported 00:11:46.404 Vendor Specific: Not Supported 00:11:46.404 Reset Timeout: 7500 ms 00:11:46.404 Doorbell Stride: 4 bytes 00:11:46.404 NVM Subsystem Reset: Not Supported 00:11:46.404 Command Sets Supported 00:11:46.404 NVM Command Set: Supported 00:11:46.404 Boot Partition: Not Supported 00:11:46.404 Memory Page Size Minimum: 4096 bytes 00:11:46.404 Memory Page Size Maximum: 65536 bytes 00:11:46.404 Persistent Memory Region: Not Supported 00:11:46.404 Optional Asynchronous Events Supported 00:11:46.404 Namespace Attribute Notices: Supported 00:11:46.404 Firmware Activation Notices: Not Supported 00:11:46.404 ANA Change Notices: Not Supported 00:11:46.404 PLE Aggregate Log Change Notices: Not Supported 00:11:46.404 LBA Status Info Alert Notices: Not Supported 00:11:46.404 EGE Aggregate Log Change Notices: Not Supported 00:11:46.404 Normal NVM Subsystem Shutdown event: Not Supported 00:11:46.404 Zone Descriptor Change Notices: Not Supported 00:11:46.404 Discovery Log Change Notices: Not Supported 00:11:46.404 Controller Attributes 00:11:46.404 128-bit Host Identifier: Not Supported 00:11:46.404 Non-Operational Permissive Mode: Not Supported 00:11:46.404 NVM Sets: Not Supported 00:11:46.404 Read Recovery Levels: Not Supported 00:11:46.404 Endurance Groups: Not Supported 00:11:46.404 Predictable Latency Mode: Not Supported 00:11:46.404 Traffic Based Keep Alive: Not Supported 00:11:46.404 Namespace Granularity: Not Supported 00:11:46.404 SQ Associations: Not Supported 00:11:46.404 UUID List: Not Supported 00:11:46.404 Multi-Domain Subsystem: Not Supported 00:11:46.404 Fixed Capacity Management: Not Supported 00:11:46.404 Variable Capacity Management: Not Supported 00:11:46.404 Delete Endurance Group: Not Supported 00:11:46.404 Delete NVM Set: Not Supported 00:11:46.404 Extended LBA Formats Supported: Supported 00:11:46.404 Flexible Data Placement Supported: Not Supported 00:11:46.404 00:11:46.404 Controller Memory Buffer Support 00:11:46.404 ================================ 00:11:46.404 Supported: No 00:11:46.404 00:11:46.404 Persistent Memory Region Support 00:11:46.404 ================================ 00:11:46.404 Supported: No 00:11:46.404 00:11:46.404 Admin Command Set Attributes 00:11:46.404 ============================ 00:11:46.404 Security Send/Receive: Not Supported 00:11:46.404 Format NVM: Supported 00:11:46.404 Firmware Activate/Download: Not Supported 00:11:46.404 Namespace Management: Supported 00:11:46.404 Device Self-Test: Not Supported 00:11:46.404 Directives: Supported 00:11:46.404 NVMe-MI: Not Supported 00:11:46.404 Virtualization Management: Not Supported 00:11:46.404 Doorbell Buffer Config: Supported 00:11:46.404 Get LBA Status Capability: Not Supported 00:11:46.404 Command & Feature Lockdown Capability: Not Supported 00:11:46.404 Abort Command Limit: 4 00:11:46.404 Async Event Request Limit: 4 00:11:46.404 Number of Firmware Slots: N/A 00:11:46.404 Firmware Slot 1 Read-Only: N/A 00:11:46.404 Firmware Activation Without Reset: N/A 00:11:46.404 Multiple Update Detection Support: N/A 00:11:46.404 Firmware Update Granularity: No Information Provided 00:11:46.404 Per-Namespace SMART Log: Yes 00:11:46.404 Asymmetric Namespace Access Log Page: Not Supported
00:11:46.404 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:46.404 Command Effects Log Page: Supported 00:11:46.404 Get Log Page Extended Data: Supported 00:11:46.404 Telemetry Log Pages: Not Supported 00:11:46.404 Persistent Event Log Pages: Not Supported 00:11:46.404 Supported Log Pages Log Page: May Support 00:11:46.404 Commands Supported & Effects Log Page: Not Supported 00:11:46.404 Feature Identifiers & Effects Log Page: May Support 00:11:46.404 NVMe-MI Commands & Effects Log Page: May Support 00:11:46.404 Data Area 4 for Telemetry Log: Not Supported 00:11:46.404 Error Log Page Entries Supported: 1 00:11:46.404 Keep Alive: Not Supported 00:11:46.405 00:11:46.405 NVM Command Set Attributes 00:11:46.405 ========================== 00:11:46.405 Submission Queue Entry Size 00:11:46.405 Max: 64 00:11:46.405 Min: 64 00:11:46.405 Completion Queue Entry Size 00:11:46.405 Max: 16 00:11:46.405 Min: 16 00:11:46.405 Number of Namespaces: 256 00:11:46.405 Compare Command: Supported 00:11:46.405 Write Uncorrectable Command: Not Supported 00:11:46.405 Dataset Management Command: Supported 00:11:46.405 Write Zeroes Command: Supported 00:11:46.405 Set Features Save Field: Supported 00:11:46.405 Reservations: Not Supported 00:11:46.405 Timestamp: Supported 00:11:46.405 Copy: Supported 00:11:46.405 Volatile Write Cache: Present 00:11:46.405 Atomic Write Unit (Normal): 1 00:11:46.405 Atomic Write Unit (PFail): 1 00:11:46.405 Atomic Compare & Write Unit: 1 00:11:46.405 Fused Compare & Write: Not Supported 00:11:46.405 Scatter-Gather List 00:11:46.405 SGL Command Set: Supported 00:11:46.405 SGL Keyed: Not Supported 00:11:46.405 SGL Bit Bucket Descriptor: Not Supported 00:11:46.405 SGL Metadata Pointer: Not Supported 00:11:46.405 Oversized SGL: Not Supported 00:11:46.405 SGL Metadata Address: Not Supported 00:11:46.405 SGL Offset: Not Supported 00:11:46.405 Transport SGL Data Block: Not Supported 00:11:46.405 Replay Protected Memory Block: Not Supported 00:11:46.405 00:11:46.405 Firmware Slot Information 00:11:46.405 ========================= 00:11:46.405 Active slot: 1 00:11:46.405 Slot 1 Firmware Revision: 1.0 00:11:46.405 00:11:46.405 00:11:46.405 Commands Supported and Effects 00:11:46.405 ============================== 00:11:46.405 Admin Commands 00:11:46.405 -------------- 00:11:46.405 Delete I/O Submission Queue (00h): Supported 00:11:46.405 Create I/O Submission Queue (01h): Supported 00:11:46.405 Get Log Page (02h): Supported 00:11:46.405 Delete I/O Completion Queue (04h): Supported 00:11:46.405 Create I/O Completion Queue (05h): Supported 00:11:46.405 Identify (06h): Supported 00:11:46.405 Abort (08h): Supported 00:11:46.405 Set Features (09h): Supported 00:11:46.405 Get Features (0Ah): Supported 00:11:46.405 Asynchronous Event Request (0Ch): Supported 00:11:46.405 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:46.405 Directive Send (19h): Supported 00:11:46.405 Directive Receive (1Ah): Supported 00:11:46.405 Virtualization Management (1Ch): Supported 00:11:46.405 Doorbell Buffer Config (7Ch): Supported 00:11:46.405 Format NVM (80h): Supported LBA-Change 00:11:46.405 I/O Commands 00:11:46.405 ------------ 00:11:46.405 Flush (00h): Supported LBA-Change 00:11:46.405 Write (01h): Supported LBA-Change 00:11:46.405 Read (02h): Supported 00:11:46.405 Compare (05h): Supported 00:11:46.405 Write Zeroes (08h): Supported LBA-Change 00:11:46.405 Dataset Management (09h): Supported LBA-Change 00:11:46.405 Unknown (0Ch): Supported 00:11:46.405 Unknown (12h): Supported 00:11:46.405 Copy (19h):
Supported LBA-Change 00:11:46.405 Unknown (1Dh): Supported LBA-Change 00:11:46.405 00:11:46.405 Error Log 00:11:46.405 ========= 00:11:46.405 00:11:46.405 Arbitration 00:11:46.405 =========== 00:11:46.405 Arbitration Burst: no limit 00:11:46.405 00:11:46.405 Power Management 00:11:46.405 ================ 00:11:46.405 Number of Power States: 1 00:11:46.405 Current Power State: Power State #0 00:11:46.405 Power State #0: 00:11:46.405 Max Power: 25.00 W 00:11:46.405 Non-Operational State: Operational 00:11:46.405 Entry Latency: 16 microseconds 00:11:46.405 Exit Latency: 4 microseconds 00:11:46.405 Relative Read Throughput: 0 00:11:46.405 Relative Read Latency: 0 00:11:46.405 Relative Write Throughput: 0 00:11:46.405 Relative Write Latency: 0 00:11:46.405 Idle Power: Not Reported 00:11:46.405 Active Power: Not Reported 00:11:46.405 Non-Operational Permissive Mode: Not Supported 00:11:46.405 00:11:46.405 Health Information 00:11:46.405 ================== 00:11:46.405 Critical Warnings: 00:11:46.405 Available Spare Space: OK 00:11:46.405 Temperature: OK 00:11:46.405 Device Reliability: OK 00:11:46.405 Read Only: No 00:11:46.405 Volatile Memory Backup: OK 00:11:46.405 Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.405 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:46.405 Available Spare: 0% 00:11:46.405 Available Spare Threshold: 0% 00:11:46.405 Life Percentage Used: 0% 00:11:46.405 Data Units Read: 1580 00:11:46.405 Data Units Written: 710 00:11:46.405 Host Read Commands: 76234 00:11:46.405 Host Write Commands: 37687 00:11:46.405 Controller Busy Time: 0 minutes 00:11:46.405 Power Cycles: 0 00:11:46.405 Power On Hours: 0 hours 00:11:46.405 Unsafe Shutdowns: 0 00:11:46.405 Unrecoverable Media Errors: 0 00:11:46.405 Lifetime Error Log Entries: 0 00:11:46.405 Warning Temperature Time: 0 minutes 00:11:46.405 Critical Temperature Time: 0 minutes 00:11:46.405 00:11:46.405 Number of Queues 00:11:46.405 ================ 00:11:46.405 Number of I/O Submission Queues: 64 00:11:46.405 Number of I/O Completion Queues: 64 00:11:46.405 00:11:46.405 ZNS Specific Controller Data 00:11:46.405 ============================ 00:11:46.405 Zone Append Size Limit: 0 00:11:46.405 00:11:46.405 00:11:46.405 Active Namespaces 00:11:46.405 ================= 00:11:46.405 Namespace ID:1 00:11:46.405 Error Recovery Timeout: Unlimited 00:11:46.405 Command Set Identifier: NVM (00h) 00:11:46.405 Deallocate: Supported 00:11:46.405 Deallocated/Unwritten Error: Supported 00:11:46.405 Deallocated Read Value: All 0x00 00:11:46.405 Deallocate in Write Zeroes: Not Supported 00:11:46.405 Deallocated Guard Field: 0xFFFF 00:11:46.405 Flush: Supported 00:11:46.405 Reservation: Not Supported 00:11:46.405 Metadata Transferred as: Separate Metadata Buffer 00:11:46.405 Namespace Sharing Capabilities: Private 00:11:46.405 Size (in LBAs): 1548666 (5GiB) 00:11:46.405 Capacity (in LBAs): 1548666 (5GiB) 00:11:46.405 Utilization (in LBAs): 1548666 (5GiB) 00:11:46.405 Thin Provisioning: Not Supported 00:11:46.405 Per-NS Atomic Units: No 00:11:46.405 Maximum Single Source Range Length: 128 00:11:46.405 Maximum Copy Length: 128 00:11:46.405 Maximum Source Range Count: 128 00:11:46.405 NGUID/EUI64 Never Reused: No 00:11:46.405 Namespace Write Protected: No 00:11:46.405 Number of LBA Formats: 8 00:11:46.405 Current LBA Format: LBA Format #07 00:11:46.405 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.405 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.405 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.405 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:11:46.405 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.405 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.405 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.405 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.405 00:11:46.405 05:09:05 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:46.405 05:09:05 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:11:46.666 ===================================================== 00:11:46.666 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:46.666 ===================================================== 00:11:46.666 Controller Capabilities/Features 00:11:46.666 ================================ 00:11:46.666 Vendor ID: 1b36 00:11:46.666 Subsystem Vendor ID: 1af4 00:11:46.666 Serial Number: 12341 00:11:46.666 Model Number: QEMU NVMe Ctrl 00:11:46.666 Firmware Version: 8.0.0 00:11:46.666 Recommended Arb Burst: 6 00:11:46.666 IEEE OUI Identifier: 00 54 52 00:11:46.666 Multi-path I/O 00:11:46.666 May have multiple subsystem ports: No 00:11:46.666 May have multiple controllers: No 00:11:46.666 Associated with SR-IOV VF: No 00:11:46.666 Max Data Transfer Size: 524288 00:11:46.666 Max Number of Namespaces: 256 00:11:46.666 Max Number of I/O Queues: 64 00:11:46.666 NVMe Specification Version (VS): 1.4 00:11:46.666 NVMe Specification Version (Identify): 1.4 00:11:46.666 Maximum Queue Entries: 2048 00:11:46.666 Contiguous Queues Required: Yes 00:11:46.666 Arbitration Mechanisms Supported 00:11:46.666 Weighted Round Robin: Not Supported 00:11:46.666 Vendor Specific: Not Supported 00:11:46.666 Reset Timeout: 7500 ms 00:11:46.666 Doorbell Stride: 4 bytes 00:11:46.666 NVM Subsystem Reset: Not Supported 00:11:46.666 Command Sets Supported 00:11:46.666 NVM Command Set: Supported 00:11:46.666 Boot Partition: Not Supported 00:11:46.666 Memory Page Size Minimum: 4096 bytes 00:11:46.666 Memory Page Size Maximum: 65536 bytes 00:11:46.666 Persistent Memory Region: Not Supported 00:11:46.666 Optional Asynchronous Events Supported 00:11:46.666 Namespace Attribute Notices: Supported 00:11:46.666 Firmware Activation Notices: Not Supported 00:11:46.666 ANA Change Notices: Not Supported 00:11:46.666 PLE Aggregate Log Change Notices: Not Supported 00:11:46.666 LBA Status Info Alert Notices: Not Supported 00:11:46.666 EGE Aggregate Log Change Notices: Not Supported 00:11:46.666 Normal NVM Subsystem Shutdown event: Not Supported 00:11:46.666 Zone Descriptor Change Notices: Not Supported 00:11:46.666 Discovery Log Change Notices: Not Supported 00:11:46.666 Controller Attributes 00:11:46.666 128-bit Host Identifier: Not Supported 00:11:46.666 Non-Operational Permissive Mode: Not Supported 00:11:46.666 NVM Sets: Not Supported 00:11:46.666 Read Recovery Levels: Not Supported 00:11:46.666 Endurance Groups: Not Supported 00:11:46.666 Predictable Latency Mode: Not Supported 00:11:46.666 Traffic Based Keep Alive: Not Supported 00:11:46.666 Namespace Granularity: Not Supported 00:11:46.666 SQ Associations: Not Supported 00:11:46.666 UUID List: Not Supported 00:11:46.666 Multi-Domain Subsystem: Not Supported 00:11:46.666 Fixed Capacity Management: Not Supported 00:11:46.666 Variable Capacity Management: Not Supported 00:11:46.666 Delete Endurance Group: Not Supported 00:11:46.666 Delete NVM Set: Not Supported 00:11:46.666 Extended LBA Formats Supported: Supported 00:11:46.666 Flexible Data Placement Supported: Not Supported
00:11:46.666 00:11:46.666 Controller Memory Buffer Support 00:11:46.666 ================================ 00:11:46.666 Supported: No 00:11:46.666 00:11:46.666 Persistent Memory Region Support 00:11:46.666 ================================ 00:11:46.666 Supported: No 00:11:46.666 00:11:46.666 Admin Command Set Attributes 00:11:46.666 ============================ 00:11:46.666 Security Send/Receive: Not Supported 00:11:46.666 Format NVM: Supported 00:11:46.666 Firmware Activate/Download: Not Supported 00:11:46.666 Namespace Management: Supported 00:11:46.666 Device Self-Test: Not Supported 00:11:46.666 Directives: Supported 00:11:46.666 NVMe-MI: Not Supported 00:11:46.666 Virtualization Management: Not Supported 00:11:46.666 Doorbell Buffer Config: Supported 00:11:46.666 Get LBA Status Capability: Not Supported 00:11:46.666 Command & Feature Lockdown Capability: Not Supported 00:11:46.666 Abort Command Limit: 4 00:11:46.666 Async Event Request Limit: 4 00:11:46.666 Number of Firmware Slots: N/A 00:11:46.666 Firmware Slot 1 Read-Only: N/A 00:11:46.666 Firmware Activation Without Reset: N/A 00:11:46.666 Multiple Update Detection Support: N/A 00:11:46.666 Firmware Update Granularity: No Information Provided 00:11:46.666 Per-Namespace SMART Log: Yes 00:11:46.666 Asymmetric Namespace Access Log Page: Not Supported 00:11:46.666 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:46.666 Command Effects Log Page: Supported 00:11:46.666 Get Log Page Extended Data: Supported 00:11:46.666 Telemetry Log Pages: Not Supported 00:11:46.666 Persistent Event Log Pages: Not Supported 00:11:46.666 Supported Log Pages Log Page: May Support 00:11:46.666 Commands Supported & Effects Log Page: Not Supported 00:11:46.666 Feature Identifiers & Effects Log Page: May Support 00:11:46.666 NVMe-MI Commands & Effects Log Page: May Support 00:11:46.666 Data Area 4 for Telemetry Log: Not Supported 00:11:46.666 Error Log Page Entries Supported: 1 00:11:46.666 Keep Alive: Not Supported 00:11:46.666 00:11:46.666 NVM Command Set Attributes 00:11:46.666 ========================== 00:11:46.666 Submission Queue Entry Size 00:11:46.666 Max: 64 00:11:46.666 Min: 64 00:11:46.666 Completion Queue Entry Size 00:11:46.666 Max: 16 00:11:46.666 Min: 16 00:11:46.666 Number of Namespaces: 256 00:11:46.666 Compare Command: Supported 00:11:46.666 Write Uncorrectable Command: Not Supported 00:11:46.666 Dataset Management Command: Supported 00:11:46.666 Write Zeroes Command: Supported 00:11:46.666 Set Features Save Field: Supported 00:11:46.666 Reservations: Not Supported 00:11:46.666 Timestamp: Supported 00:11:46.666 Copy: Supported 00:11:46.666 Volatile Write Cache: Present 00:11:46.666 Atomic Write Unit (Normal): 1 00:11:46.666 Atomic Write Unit (PFail): 1 00:11:46.666 Atomic Compare & Write Unit: 1 00:11:46.666 Fused Compare & Write: Not Supported 00:11:46.666 Scatter-Gather List 00:11:46.666 SGL Command Set: Supported 00:11:46.666 SGL Keyed: Not Supported 00:11:46.666 SGL Bit Bucket Descriptor: Not Supported 00:11:46.666 SGL Metadata Pointer: Not Supported 00:11:46.666 Oversized SGL: Not Supported 00:11:46.666 SGL Metadata Address: Not Supported 00:11:46.666 SGL Offset: Not Supported 00:11:46.666 Transport SGL Data Block: Not Supported 00:11:46.666 Replay Protected Memory Block: Not Supported 00:11:46.666 00:11:46.666 Firmware Slot Information 00:11:46.666 ========================= 00:11:46.666 Active slot: 1 00:11:46.666 Slot 1 Firmware Revision: 1.0 00:11:46.666 00:11:46.666 00:11:46.666 Commands Supported and Effects 00:11:46.666
============================== 00:11:46.666 Admin Commands 00:11:46.666 -------------- 00:11:46.666 Delete I/O Submission Queue (00h): Supported 00:11:46.666 Create I/O Submission Queue (01h): Supported 00:11:46.666 Get Log Page (02h): Supported 00:11:46.666 Delete I/O Completion Queue (04h): Supported 00:11:46.666 Create I/O Completion Queue (05h): Supported 00:11:46.666 Identify (06h): Supported 00:11:46.666 Abort (08h): Supported 00:11:46.666 Set Features (09h): Supported 00:11:46.666 Get Features (0Ah): Supported 00:11:46.666 Asynchronous Event Request (0Ch): Supported 00:11:46.666 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:46.666 Directive Send (19h): Supported 00:11:46.666 Directive Receive (1Ah): Supported 00:11:46.666 Virtualization Management (1Ch): Supported 00:11:46.666 Doorbell Buffer Config (7Ch): Supported 00:11:46.666 Format NVM (80h): Supported LBA-Change 00:11:46.666 I/O Commands 00:11:46.666 ------------ 00:11:46.666 Flush (00h): Supported LBA-Change 00:11:46.666 Write (01h): Supported LBA-Change 00:11:46.666 Read (02h): Supported 00:11:46.666 Compare (05h): Supported 00:11:46.666 Write Zeroes (08h): Supported LBA-Change 00:11:46.666 Dataset Management (09h): Supported LBA-Change 00:11:46.666 Unknown (0Ch): Supported 00:11:46.666 Unknown (12h): Supported 00:11:46.666 Copy (19h): Supported LBA-Change 00:11:46.667 Unknown (1Dh): Supported LBA-Change 00:11:46.667 00:11:46.667 Error Log 00:11:46.667 ========= 00:11:46.667 00:11:46.667 Arbitration 00:11:46.667 =========== 00:11:46.667 Arbitration Burst: no limit 00:11:46.667 00:11:46.667 Power Management 00:11:46.667 ================ 00:11:46.667 Number of Power States: 1 00:11:46.667 Current Power State: Power State #0 00:11:46.667 Power State #0: 00:11:46.667 Max Power: 25.00 W 00:11:46.667 Non-Operational State: Operational 00:11:46.667 Entry Latency: 16 microseconds 00:11:46.667 Exit Latency: 4 microseconds 00:11:46.667 Relative Read Throughput: 0 00:11:46.667 Relative Read Latency: 0 00:11:46.667 Relative Write Throughput: 0 00:11:46.667 Relative Write Latency: 0 00:11:46.667 Idle Power: Not Reported 00:11:46.667 Active Power: Not Reported 00:11:46.667 Non-Operational Permissive Mode: Not Supported 00:11:46.667 00:11:46.667 Health Information 00:11:46.667 ================== 00:11:46.667 Critical Warnings: 00:11:46.667 Available Spare Space: OK 00:11:46.667 Temperature: OK 00:11:46.667 Device Reliability: OK 00:11:46.667 Read Only: No 00:11:46.667 Volatile Memory Backup: OK 00:11:46.667 Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.667 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:46.667 Available Spare: 0% 00:11:46.667 Available Spare Threshold: 0% 00:11:46.667 Life Percentage Used: 0% 00:11:46.667 Data Units Read: 1142 00:11:46.667 Data Units Written: 523 00:11:46.667 Host Read Commands: 55536 00:11:46.667 Host Write Commands: 27264 00:11:46.667 Controller Busy Time: 0 minutes 00:11:46.667 Power Cycles: 0 00:11:46.667 Power On Hours: 0 hours 00:11:46.667 Unsafe Shutdowns: 0 00:11:46.667 Unrecoverable Media Errors: 0 00:11:46.667 Lifetime Error Log Entries: 0 00:11:46.667 Warning Temperature Time: 0 minutes 00:11:46.667 Critical Temperature Time: 0 minutes 00:11:46.667 00:11:46.667 Number of Queues 00:11:46.667 ================ 00:11:46.667 Number of I/O Submission Queues: 64 00:11:46.667 Number of I/O Completion Queues: 64 00:11:46.667 00:11:46.667 ZNS Specific Controller Data 00:11:46.667 ============================ 00:11:46.667 Zone Append Size Limit: 0 00:11:46.667 00:11:46.667 
00:11:46.667 Active Namespaces 00:11:46.667 ================= 00:11:46.667 Namespace ID:1 00:11:46.667 Error Recovery Timeout: Unlimited 00:11:46.667 Command Set Identifier: NVM (00h) 00:11:46.667 Deallocate: Supported 00:11:46.667 Deallocated/Unwritten Error: Supported 00:11:46.667 Deallocated Read Value: All 0x00 00:11:46.667 Deallocate in Write Zeroes: Not Supported 00:11:46.667 Deallocated Guard Field: 0xFFFF 00:11:46.667 Flush: Supported 00:11:46.667 Reservation: Not Supported 00:11:46.667 Namespace Sharing Capabilities: Private 00:11:46.667 Size (in LBAs): 1310720 (5GiB) 00:11:46.667 Capacity (in LBAs): 1310720 (5GiB) 00:11:46.667 Utilization (in LBAs): 1310720 (5GiB) 00:11:46.667 Thin Provisioning: Not Supported 00:11:46.667 Per-NS Atomic Units: No 00:11:46.667 Maximum Single Source Range Length: 128 00:11:46.667 Maximum Copy Length: 128 00:11:46.667 Maximum Source Range Count: 128 00:11:46.667 NGUID/EUI64 Never Reused: No 00:11:46.667 Namespace Write Protected: No 00:11:46.667 Number of LBA Formats: 8 00:11:46.667 Current LBA Format: LBA Format #04 00:11:46.667 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.667 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.667 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.667 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.667 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.667 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.667 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.667 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.667 00:11:46.667 05:09:05 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:46.667 05:09:05 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:11:46.927 ===================================================== 00:11:46.927 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:46.927 ===================================================== 00:11:46.927 Controller Capabilities/Features 00:11:46.927 ================================ 00:11:46.927 Vendor ID: 1b36 00:11:46.927 Subsystem Vendor ID: 1af4 00:11:46.927 Serial Number: 12342 00:11:46.927 Model Number: QEMU NVMe Ctrl 00:11:46.927 Firmware Version: 8.0.0 00:11:46.927 Recommended Arb Burst: 6 00:11:46.927 IEEE OUI Identifier: 00 54 52 00:11:46.927 Multi-path I/O 00:11:46.927 May have multiple subsystem ports: No 00:11:46.927 May have multiple controllers: No 00:11:46.927 Associated with SR-IOV VF: No 00:11:46.927 Max Data Transfer Size: 524288 00:11:46.927 Max Number of Namespaces: 256 00:11:46.927 Max Number of I/O Queues: 64 00:11:46.927 NVMe Specification Version (VS): 1.4 00:11:46.927 NVMe Specification Version (Identify): 1.4 00:11:46.927 Maximum Queue Entries: 2048 00:11:46.927 Contiguous Queues Required: Yes 00:11:46.927 Arbitration Mechanisms Supported 00:11:46.927 Weighted Round Robin: Not Supported 00:11:46.927 Vendor Specific: Not Supported 00:11:46.927 Reset Timeout: 7500 ms 00:11:46.927 Doorbell Stride: 4 bytes 00:11:46.927 NVM Subsystem Reset: Not Supported 00:11:46.927 Command Sets Supported 00:11:46.927 NVM Command Set: Supported 00:11:46.927 Boot Partition: Not Supported 00:11:46.927 Memory Page Size Minimum: 4096 bytes 00:11:46.927 Memory Page Size Maximum: 65536 bytes 00:11:46.928 Persistent Memory Region: Not Supported 00:11:46.928 Optional Asynchronous Events Supported 00:11:46.928 Namespace Attribute Notices: Supported 00:11:46.928 Firmware Activation Notices: Not Supported 00:11:46.928 ANA Change 
Notices: Not Supported 00:11:46.928 PLE Aggregate Log Change Notices: Not Supported 00:11:46.928 LBA Status Info Alert Notices: Not Supported 00:11:46.928 EGE Aggregate Log Change Notices: Not Supported 00:11:46.928 Normal NVM Subsystem Shutdown event: Not Supported 00:11:46.928 Zone Descriptor Change Notices: Not Supported 00:11:46.928 Discovery Log Change Notices: Not Supported 00:11:46.928 Controller Attributes 00:11:46.928 128-bit Host Identifier: Not Supported 00:11:46.928 Non-Operational Permissive Mode: Not Supported 00:11:46.928 NVM Sets: Not Supported 00:11:46.928 Read Recovery Levels: Not Supported 00:11:46.928 Endurance Groups: Not Supported 00:11:46.928 Predictable Latency Mode: Not Supported 00:11:46.928 Traffic Based Keep Alive: Not Supported 00:11:46.928 Namespace Granularity: Not Supported 00:11:46.928 SQ Associations: Not Supported 00:11:46.928 UUID List: Not Supported 00:11:46.928 Multi-Domain Subsystem: Not Supported 00:11:46.928 Fixed Capacity Management: Not Supported 00:11:46.928 Variable Capacity Management: Not Supported 00:11:46.928 Delete Endurance Group: Not Supported 00:11:46.928 Delete NVM Set: Not Supported 00:11:46.928 Extended LBA Formats Supported: Supported 00:11:46.928 Flexible Data Placement Supported: Not Supported 00:11:46.928 00:11:46.928 Controller Memory Buffer Support 00:11:46.928 ================================ 00:11:46.928 Supported: No 00:11:46.928 00:11:46.928 Persistent Memory Region Support 00:11:46.928 ================================ 00:11:46.928 Supported: No 00:11:46.928 00:11:46.928 Admin Command Set Attributes 00:11:46.928 ============================ 00:11:46.928 Security Send/Receive: Not Supported 00:11:46.928 Format NVM: Supported 00:11:46.928 Firmware Activate/Download: Not Supported 00:11:46.928 Namespace Management: Supported 00:11:46.928 Device Self-Test: Not Supported 00:11:46.928 Directives: Supported 00:11:46.928 NVMe-MI: Not Supported 00:11:46.928 Virtualization Management: Not Supported 00:11:46.928 Doorbell Buffer Config: Supported 00:11:46.928 Get LBA Status Capability: Not Supported 00:11:46.928 Command & Feature Lockdown Capability: Not Supported 00:11:46.928 Abort Command Limit: 4 00:11:46.928 Async Event Request Limit: 4 00:11:46.928 Number of Firmware Slots: N/A 00:11:46.928 Firmware Slot 1 Read-Only: N/A 00:11:46.928 Firmware Activation Without Reset: N/A 00:11:46.928 Multiple Update Detection Support: N/A 00:11:46.928 Firmware Update Granularity: No Information Provided 00:11:46.928 Per-Namespace SMART Log: Yes 00:11:46.928 Asymmetric Namespace Access Log Page: Not Supported 00:11:46.928 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:46.928 Command Effects Log Page: Supported 00:11:46.928 Get Log Page Extended Data: Supported 00:11:46.928 Telemetry Log Pages: Not Supported 00:11:46.928 Persistent Event Log Pages: Not Supported 00:11:46.928 Supported Log Pages Log Page: May Support 00:11:46.928 Commands Supported & Effects Log Page: Not Supported 00:11:46.928 Feature Identifiers & Effects Log Page: May Support 00:11:46.928 NVMe-MI Commands & Effects Log Page: May Support 00:11:46.928 Data Area 4 for Telemetry Log: Not Supported 00:11:46.928 Error Log Page Entries Supported: 1 00:11:46.928 Keep Alive: Not Supported 00:11:46.928 00:11:46.928 NVM Command Set Attributes 00:11:46.928 ========================== 00:11:46.928 Submission Queue Entry Size 00:11:46.928 Max: 64 00:11:46.928 Min: 64 00:11:46.928 Completion Queue Entry Size 00:11:46.928 Max: 16 00:11:46.928 Min: 16 00:11:46.928 Number of Namespaces: 256
00:11:46.928 Compare Command: Supported 00:11:46.928 Write Uncorrectable Command: Not Supported 00:11:46.928 Dataset Management Command: Supported 00:11:46.928 Write Zeroes Command: Supported 00:11:46.928 Set Features Save Field: Supported 00:11:46.928 Reservations: Not Supported 00:11:46.928 Timestamp: Supported 00:11:46.928 Copy: Supported 00:11:46.928 Volatile Write Cache: Present 00:11:46.928 Atomic Write Unit (Normal): 1 00:11:46.928 Atomic Write Unit (PFail): 1 00:11:46.928 Atomic Compare & Write Unit: 1 00:11:46.928 Fused Compare & Write: Not Supported 00:11:46.928 Scatter-Gather List 00:11:46.928 SGL Command Set: Supported 00:11:46.928 SGL Keyed: Not Supported 00:11:46.928 SGL Bit Bucket Descriptor: Not Supported 00:11:46.928 SGL Metadata Pointer: Not Supported 00:11:46.928 Oversized SGL: Not Supported 00:11:46.928 SGL Metadata Address: Not Supported 00:11:46.928 SGL Offset: Not Supported 00:11:46.928 Transport SGL Data Block: Not Supported 00:11:46.928 Replay Protected Memory Block: Not Supported 00:11:46.928 00:11:46.928 Firmware Slot Information 00:11:46.928 ========================= 00:11:46.928 Active slot: 1 00:11:46.928 Slot 1 Firmware Revision: 1.0 00:11:46.928 00:11:46.928 00:11:46.928 Commands Supported and Effects 00:11:46.928 ============================== 00:11:46.928 Admin Commands 00:11:46.928 -------------- 00:11:46.928 Delete I/O Submission Queue (00h): Supported 00:11:46.928 Create I/O Submission Queue (01h): Supported 00:11:46.928 Get Log Page (02h): Supported 00:11:46.928 Delete I/O Completion Queue (04h): Supported 00:11:46.928 Create I/O Completion Queue (05h): Supported 00:11:46.928 Identify (06h): Supported 00:11:46.928 Abort (08h): Supported 00:11:46.928 Set Features (09h): Supported 00:11:46.928 Get Features (0Ah): Supported 00:11:46.928 Asynchronous Event Request (0Ch): Supported 00:11:46.928 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:46.928 Directive Send (19h): Supported 00:11:46.928 Directive Receive (1Ah): Supported 00:11:46.928 Virtualization Management (1Ch): Supported 00:11:46.928 Doorbell Buffer Config (7Ch): Supported 00:11:46.928 Format NVM (80h): Supported LBA-Change 00:11:46.928 I/O Commands 00:11:46.928 ------------ 00:11:46.928 Flush (00h): Supported LBA-Change 00:11:46.928 Write (01h): Supported LBA-Change 00:11:46.928 Read (02h): Supported 00:11:46.928 Compare (05h): Supported 00:11:46.928 Write Zeroes (08h): Supported LBA-Change 00:11:46.928 Dataset Management (09h): Supported LBA-Change 00:11:46.928 Unknown (0Ch): Supported 00:11:46.928 Unknown (12h): Supported 00:11:46.928 Copy (19h): Supported LBA-Change 00:11:46.928 Unknown (1Dh): Supported LBA-Change 00:11:46.928 00:11:46.928 Error Log 00:11:46.928 ========= 00:11:46.928 00:11:46.928 Arbitration 00:11:46.928 =========== 00:11:46.928 Arbitration Burst: no limit 00:11:46.928 00:11:46.928 Power Management 00:11:46.928 ================ 00:11:46.928 Number of Power States: 1 00:11:46.928 Current Power State: Power State #0 00:11:46.928 Power State #0: 00:11:46.928 Max Power: 25.00 W 00:11:46.928 Non-Operational State: Operational 00:11:46.928 Entry Latency: 16 microseconds 00:11:46.928 Exit Latency: 4 microseconds 00:11:46.928 Relative Read Throughput: 0 00:11:46.928 Relative Read Latency: 0 00:11:46.928 Relative Write Throughput: 0 00:11:46.928 Relative Write Latency: 0 00:11:46.928 Idle Power: Not Reported 00:11:46.928 Active Power: Not Reported 00:11:46.928 Non-Operational Permissive Mode: Not Supported 00:11:46.928 00:11:46.928 Health Information 00:11:46.928 
================== 00:11:46.928 Critical Warnings: 00:11:46.928 Available Spare Space: OK 00:11:46.928 Temperature: OK 00:11:46.928 Device Reliability: OK 00:11:46.928 Read Only: No 00:11:46.928 Volatile Memory Backup: OK 00:11:46.928 Current Temperature: 323 Kelvin (50 Celsius) 00:11:46.928 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:46.928 Available Spare: 0% 00:11:46.928 Available Spare Threshold: 0% 00:11:46.928 Life Percentage Used: 0% 00:11:46.928 Data Units Read: 3500 00:11:46.928 Data Units Written: 1599 00:11:46.928 Host Read Commands: 167736 00:11:46.928 Host Write Commands: 82224 00:11:46.928 Controller Busy Time: 0 minutes 00:11:46.928 Power Cycles: 0 00:11:46.928 Power On Hours: 0 hours 00:11:46.928 Unsafe Shutdowns: 0 00:11:46.928 Unrecoverable Media Errors: 0 00:11:46.928 Lifetime Error Log Entries: 0 00:11:46.928 Warning Temperature Time: 0 minutes 00:11:46.928 Critical Temperature Time: 0 minutes 00:11:46.928 00:11:46.928 Number of Queues 00:11:46.928 ================ 00:11:46.928 Number of I/O Submission Queues: 64 00:11:46.929 Number of I/O Completion Queues: 64 00:11:46.929 00:11:46.929 ZNS Specific Controller Data 00:11:46.929 ============================ 00:11:46.929 Zone Append Size Limit: 0 00:11:46.929 00:11:46.929 00:11:46.929 Active Namespaces 00:11:46.929 ================= 00:11:46.929 Namespace ID:1 00:11:46.929 Error Recovery Timeout: Unlimited 00:11:46.929 Command Set Identifier: NVM (00h) 00:11:46.929 Deallocate: Supported 00:11:46.929 Deallocated/Unwritten Error: Supported 00:11:46.929 Deallocated Read Value: All 0x00 00:11:46.929 Deallocate in Write Zeroes: Not Supported 00:11:46.929 Deallocated Guard Field: 0xFFFF 00:11:46.929 Flush: Supported 00:11:46.929 Reservation: Not Supported 00:11:46.929 Namespace Sharing Capabilities: Private 00:11:46.929 Size (in LBAs): 1048576 (4GiB) 00:11:46.929 Capacity (in LBAs): 1048576 (4GiB) 00:11:46.929 Utilization (in LBAs): 1048576 (4GiB) 00:11:46.929 Thin Provisioning: Not Supported 00:11:46.929 Per-NS Atomic Units: No 00:11:46.929 Maximum Single Source Range Length: 128 00:11:46.929 Maximum Copy Length: 128 00:11:46.929 Maximum Source Range Count: 128 00:11:46.929 NGUID/EUI64 Never Reused: No 00:11:46.929 Namespace Write Protected: No 00:11:46.929 Number of LBA Formats: 8 00:11:46.929 Current LBA Format: LBA Format #04 00:11:46.929 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.929 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.929 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.929 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.929 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.929 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.929 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.929 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.929 00:11:46.929 Namespace ID:2 00:11:46.929 Error Recovery Timeout: Unlimited 00:11:46.929 Command Set Identifier: NVM (00h) 00:11:46.929 Deallocate: Supported 00:11:46.929 Deallocated/Unwritten Error: Supported 00:11:46.929 Deallocated Read Value: All 0x00 00:11:46.929 Deallocate in Write Zeroes: Not Supported 00:11:46.929 Deallocated Guard Field: 0xFFFF 00:11:46.929 Flush: Supported 00:11:46.929 Reservation: Not Supported 00:11:46.929 Namespace Sharing Capabilities: Private 00:11:46.929 Size (in LBAs): 1048576 (4GiB) 00:11:46.929 Capacity (in LBAs): 1048576 (4GiB) 00:11:46.929 Utilization (in LBAs): 1048576 (4GiB) 00:11:46.929 Thin Provisioning: Not Supported 00:11:46.929 Per-NS Atomic Units: No 
00:11:46.929 Maximum Single Source Range Length: 128 00:11:46.929 Maximum Copy Length: 128 00:11:46.929 Maximum Source Range Count: 128 00:11:46.929 NGUID/EUI64 Never Reused: No 00:11:46.929 Namespace Write Protected: No 00:11:46.929 Number of LBA Formats: 8 00:11:46.929 Current LBA Format: LBA Format #04 00:11:46.929 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.929 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.929 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.929 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.929 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.929 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.929 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.929 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.929 00:11:46.929 Namespace ID:3 00:11:46.929 Error Recovery Timeout: Unlimited 00:11:46.929 Command Set Identifier: NVM (00h) 00:11:46.929 Deallocate: Supported 00:11:46.929 Deallocated/Unwritten Error: Supported 00:11:46.929 Deallocated Read Value: All 0x00 00:11:46.929 Deallocate in Write Zeroes: Not Supported 00:11:46.929 Deallocated Guard Field: 0xFFFF 00:11:46.929 Flush: Supported 00:11:46.929 Reservation: Not Supported 00:11:46.929 Namespace Sharing Capabilities: Private 00:11:46.929 Size (in LBAs): 1048576 (4GiB) 00:11:46.929 Capacity (in LBAs): 1048576 (4GiB) 00:11:46.929 Utilization (in LBAs): 1048576 (4GiB) 00:11:46.929 Thin Provisioning: Not Supported 00:11:46.929 Per-NS Atomic Units: No 00:11:46.929 Maximum Single Source Range Length: 128 00:11:46.929 Maximum Copy Length: 128 00:11:46.929 Maximum Source Range Count: 128 00:11:46.929 NGUID/EUI64 Never Reused: No 00:11:46.929 Namespace Write Protected: No 00:11:46.929 Number of LBA Formats: 8 00:11:46.929 Current LBA Format: LBA Format #04 00:11:46.929 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:46.929 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:46.929 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:46.929 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:46.929 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:46.929 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:46.929 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:46.929 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:46.929 00:11:46.929 05:09:05 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:46.929 05:09:05 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:11:47.190 ===================================================== 00:11:47.190 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:47.190 ===================================================== 00:11:47.190 Controller Capabilities/Features 00:11:47.190 ================================ 00:11:47.190 Vendor ID: 1b36 00:11:47.190 Subsystem Vendor ID: 1af4 00:11:47.190 Serial Number: 12343 00:11:47.190 Model Number: QEMU NVMe Ctrl 00:11:47.190 Firmware Version: 8.0.0 00:11:47.190 Recommended Arb Burst: 6 00:11:47.190 IEEE OUI Identifier: 00 54 52 00:11:47.190 Multi-path I/O 00:11:47.190 May have multiple subsystem ports: No 00:11:47.190 May have multiple controllers: Yes 00:11:47.190 Associated with SR-IOV VF: No 00:11:47.190 Max Data Transfer Size: 524288 00:11:47.190 Max Number of Namespaces: 256 00:11:47.190 Max Number of I/O Queues: 64 00:11:47.190 NVMe Specification Version (VS): 1.4 00:11:47.190 NVMe Specification Version (Identify): 1.4 00:11:47.190 Maximum Queue Entries: 2048 
00:11:47.190 Contiguous Queues Required: Yes 00:11:47.190 Arbitration Mechanisms Supported 00:11:47.190 Weighted Round Robin: Not Supported 00:11:47.190 Vendor Specific: Not Supported 00:11:47.190 Reset Timeout: 7500 ms 00:11:47.190 Doorbell Stride: 4 bytes 00:11:47.190 NVM Subsystem Reset: Not Supported 00:11:47.190 Command Sets Supported 00:11:47.190 NVM Command Set: Supported 00:11:47.190 Boot Partition: Not Supported 00:11:47.190 Memory Page Size Minimum: 4096 bytes 00:11:47.190 Memory Page Size Maximum: 65536 bytes 00:11:47.190 Persistent Memory Region: Not Supported 00:11:47.190 Optional Asynchronous Events Supported 00:11:47.190 Namespace Attribute Notices: Supported 00:11:47.190 Firmware Activation Notices: Not Supported 00:11:47.190 ANA Change Notices: Not Supported 00:11:47.190 PLE Aggregate Log Change Notices: Not Supported 00:11:47.190 LBA Status Info Alert Notices: Not Supported 00:11:47.190 EGE Aggregate Log Change Notices: Not Supported 00:11:47.190 Normal NVM Subsystem Shutdown event: Not Supported 00:11:47.190 Zone Descriptor Change Notices: Not Supported 00:11:47.190 Discovery Log Change Notices: Not Supported 00:11:47.190 Controller Attributes 00:11:47.190 128-bit Host Identifier: Not Supported 00:11:47.190 Non-Operational Permissive Mode: Not Supported 00:11:47.190 NVM Sets: Not Supported 00:11:47.190 Read Recovery Levels: Not Supported 00:11:47.190 Endurance Groups: Supported 00:11:47.190 Predictable Latency Mode: Not Supported 00:11:47.190 Traffic Based Keep Alive: Not Supported 00:11:47.190 Namespace Granularity: Not Supported 00:11:47.190 SQ Associations: Not Supported 00:11:47.190 UUID List: Not Supported 00:11:47.190 Multi-Domain Subsystem: Not Supported 00:11:47.190 Fixed Capacity Management: Not Supported 00:11:47.190 Variable Capacity Management: Not Supported 00:11:47.190 Delete Endurance Group: Not Supported 00:11:47.190 Delete NVM Set: Not Supported 00:11:47.190 Extended LBA Formats Supported: Supported 00:11:47.190 Flexible Data Placement Supported: Supported 00:11:47.190 00:11:47.190 Controller Memory Buffer Support 00:11:47.190 ================================ 00:11:47.190 Supported: No 00:11:47.190 00:11:47.190 Persistent Memory Region Support 00:11:47.190 ================================ 00:11:47.190 Supported: No 00:11:47.190 00:11:47.190 Admin Command Set Attributes 00:11:47.190 ============================ 00:11:47.190 Security Send/Receive: Not Supported 00:11:47.190 Format NVM: Supported 00:11:47.190 Firmware Activate/Download: Not Supported 00:11:47.190 Namespace Management: Supported 00:11:47.190 Device Self-Test: Not Supported 00:11:47.190 Directives: Supported 00:11:47.190 NVMe-MI: Not Supported 00:11:47.190 Virtualization Management: Not Supported 00:11:47.190 Doorbell Buffer Config: Supported 00:11:47.190 Get LBA Status Capability: Not Supported 00:11:47.190 Command & Feature Lockdown Capability: Not Supported 00:11:47.190 Abort Command Limit: 4 00:11:47.190 Async Event Request Limit: 4 00:11:47.190 Number of Firmware Slots: N/A 00:11:47.190 Firmware Slot 1 Read-Only: N/A 00:11:47.190 Firmware Activation Without Reset: N/A 00:11:47.190 Multiple Update Detection Support: N/A 00:11:47.190 Firmware Update Granularity: No Information Provided 00:11:47.190 Per-Namespace SMART Log: Yes 00:11:47.190 Asymmetric Namespace Access Log Page: Not Supported 00:11:47.190 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:47.190 Command Effects Log Page: Supported 00:11:47.190 Get Log Page Extended Data: Supported 00:11:47.190 Telemetry Log Pages: Not
Supported 00:11:47.190 Persistent Event Log Pages: Not Supported 00:11:47.190 Supported Log Pages Log Page: May Support 00:11:47.190 Commands Supported & Effects Log Page: Not Supported 00:11:47.190 Feature Identifiers & Effects Log Page: May Support 00:11:47.190 NVMe-MI Commands & Effects Log Page: May Support 00:11:47.190 Data Area 4 for Telemetry Log: Not Supported 00:11:47.190 Error Log Page Entries Supported: 1 00:11:47.190 Keep Alive: Not Supported 00:11:47.190 00:11:47.190 NVM Command Set Attributes 00:11:47.190 ========================== 00:11:47.190 Submission Queue Entry Size 00:11:47.190 Max: 64 00:11:47.190 Min: 64 00:11:47.190 Completion Queue Entry Size 00:11:47.190 Max: 16 00:11:47.190 Min: 16 00:11:47.190 Number of Namespaces: 256 00:11:47.190 Compare Command: Supported 00:11:47.190 Write Uncorrectable Command: Not Supported 00:11:47.190 Dataset Management Command: Supported 00:11:47.190 Write Zeroes Command: Supported 00:11:47.190 Set Features Save Field: Supported 00:11:47.190 Reservations: Not Supported 00:11:47.190 Timestamp: Supported 00:11:47.190 Copy: Supported 00:11:47.190 Volatile Write Cache: Present 00:11:47.190 Atomic Write Unit (Normal): 1 00:11:47.190 Atomic Write Unit (PFail): 1 00:11:47.190 Atomic Compare & Write Unit: 1 00:11:47.190 Fused Compare & Write: Not Supported 00:11:47.190 Scatter-Gather List 00:11:47.190 SGL Command Set: Supported 00:11:47.190 SGL Keyed: Not Supported 00:11:47.190 SGL Bit Bucket Descriptor: Not Supported 00:11:47.190 SGL Metadata Pointer: Not Supported 00:11:47.190 Oversized SGL: Not Supported 00:11:47.190 SGL Metadata Address: Not Supported 00:11:47.190 SGL Offset: Not Supported 00:11:47.190 Transport SGL Data Block: Not Supported 00:11:47.190 Replay Protected Memory Block: Not Supported 00:11:47.190 00:11:47.190 Firmware Slot Information 00:11:47.190 ========================= 00:11:47.190 Active slot: 1 00:11:47.190 Slot 1 Firmware Revision: 1.0 00:11:47.190 00:11:47.190 00:11:47.190 Commands Supported and Effects 00:11:47.190 ============================== 00:11:47.190 Admin Commands 00:11:47.190 -------------- 00:11:47.190 Delete I/O Submission Queue (00h): Supported 00:11:47.190 Create I/O Submission Queue (01h): Supported 00:11:47.190 Get Log Page (02h): Supported 00:11:47.190 Delete I/O Completion Queue (04h): Supported 00:11:47.190 Create I/O Completion Queue (05h): Supported 00:11:47.190 Identify (06h): Supported 00:11:47.190 Abort (08h): Supported 00:11:47.190 Set Features (09h): Supported 00:11:47.190 Get Features (0Ah): Supported 00:11:47.190 Asynchronous Event Request (0Ch): Supported 00:11:47.190 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:47.190 Directive Send (19h): Supported 00:11:47.190 Directive Receive (1Ah): Supported 00:11:47.190 Virtualization Management (1Ch): Supported 00:11:47.190 Doorbell Buffer Config (7Ch): Supported 00:11:47.190 Format NVM (80h): Supported LBA-Change 00:11:47.190 I/O Commands 00:11:47.190 ------------ 00:11:47.190 Flush (00h): Supported LBA-Change 00:11:47.190 Write (01h): Supported LBA-Change 00:11:47.190 Read (02h): Supported 00:11:47.190 Compare (05h): Supported 00:11:47.190 Write Zeroes (08h): Supported LBA-Change 00:11:47.190 Dataset Management (09h): Supported LBA-Change 00:11:47.190 Unknown (0Ch): Supported 00:11:47.190 Unknown (12h): Supported 00:11:47.190 Copy (19h): Supported LBA-Change 00:11:47.191 Unknown (1Dh): Supported LBA-Change 00:11:47.191 00:11:47.191 Error Log 00:11:47.191 ========= 00:11:47.191 00:11:47.191 Arbitration 00:11:47.191 ===========
00:11:47.190 Arbitration Burst: no limit 00:11:47.191 00:11:47.191 Power Management 00:11:47.191 ================ 00:11:47.191 Number of Power States: 1 00:11:47.191 Current Power State: Power State #0 00:11:47.191 Power State #0: 00:11:47.191 Max Power: 25.00 W 00:11:47.191 Non-Operational State: Operational 00:11:47.191 Entry Latency: 16 microseconds 00:11:47.191 Exit Latency: 4 microseconds 00:11:47.191 Relative Read Throughput: 0 00:11:47.191 Relative Read Latency: 0 00:11:47.191 Relative Write Throughput: 0 00:11:47.191 Relative Write Latency: 0 00:11:47.191 Idle Power: Not Reported 00:11:47.191 Active Power: Not Reported 00:11:47.191 Non-Operational Permissive Mode: Not Supported 00:11:47.191 00:11:47.191 Health Information 00:11:47.191 ================== 00:11:47.191 Critical Warnings: 00:11:47.191 Available Spare Space: OK 00:11:47.191 Temperature: OK 00:11:47.191 Device Reliability: OK 00:11:47.191 Read Only: No 00:11:47.191 Volatile Memory Backup: OK 00:11:47.191 Current Temperature: 323 Kelvin (50 Celsius) 00:11:47.191 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:47.191 Available Spare: 0% 00:11:47.191 Available Spare Threshold: 0% 00:11:47.191 Life Percentage Used: 0% 00:11:47.191 Data Units Read: 1187 00:11:47.191 Data Units Written: 547 00:11:47.191 Host Read Commands: 56002 00:11:47.191 Host Write Commands: 27501 00:11:47.191 Controller Busy Time: 0 minutes 00:11:47.191 Power Cycles: 0 00:11:47.191 Power On Hours: 0 hours 00:11:47.191 Unsafe Shutdowns: 0 00:11:47.191 Unrecoverable Media Errors: 0 00:11:47.191 Lifetime Error Log Entries: 0 00:11:47.191 Warning Temperature Time: 0 minutes 00:11:47.191 Critical Temperature Time: 0 minutes 00:11:47.191 00:11:47.191 Number of Queues 00:11:47.191 ================ 00:11:47.191 Number of I/O Submission Queues: 64 00:11:47.191 Number of I/O Completion Queues: 64 00:11:47.191 00:11:47.191 ZNS Specific Controller Data 00:11:47.191 ============================ 00:11:47.191 Zone Append Size Limit: 0 00:11:47.191 00:11:47.191 00:11:47.191 Active Namespaces 00:11:47.191 ================= 00:11:47.191 Namespace ID:1 00:11:47.191 Error Recovery Timeout: Unlimited 00:11:47.191 Command Set Identifier: NVM (00h) 00:11:47.191 Deallocate: Supported 00:11:47.191 Deallocated/Unwritten Error: Supported 00:11:47.191 Deallocated Read Value: All 0x00 00:11:47.191 Deallocate in Write Zeroes: Not Supported 00:11:47.191 Deallocated Guard Field: 0xFFFF 00:11:47.191 Flush: Supported 00:11:47.191 Reservation: Not Supported 00:11:47.191 Namespace Sharing Capabilities: Multiple Controllers 00:11:47.191 Size (in LBAs): 262144 (1GiB) 00:11:47.191 Capacity (in LBAs): 262144 (1GiB) 00:11:47.191 Utilization (in LBAs): 262144 (1GiB) 00:11:47.191 Thin Provisioning: Not Supported 00:11:47.191 Per-NS Atomic Units: No 00:11:47.191 Maximum Single Source Range Length: 128 00:11:47.191 Maximum Copy Length: 128 00:11:47.191 Maximum Source Range Count: 128 00:11:47.191 NGUID/EUI64 Never Reused: No 00:11:47.191 Namespace Write Protected: No 00:11:47.191 Endurance group ID: 1 00:11:47.191 Number of LBA Formats: 8 00:11:47.191 Current LBA Format: LBA Format #04 00:11:47.191 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:47.191 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:47.191 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:47.191 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:47.191 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:47.191 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:47.191 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:11:47.191 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:47.191 00:11:47.191 Get Feature FDP: 00:11:47.191 ================ 00:11:47.191 Enabled: Yes 00:11:47.191 FDP configuration index: 0 00:11:47.191 00:11:47.191 FDP configurations log page 00:11:47.191 =========================== 00:11:47.191 Number of FDP configurations: 1 00:11:47.191 Version: 0 00:11:47.191 Size: 112 00:11:47.191 FDP Configuration Descriptor: 0 00:11:47.191 Descriptor Size: 96 00:11:47.191 Reclaim Group Identifier format: 2 00:11:47.191 FDP Volatile Write Cache: Not Present 00:11:47.191 FDP Configuration: Valid 00:11:47.191 Vendor Specific Size: 0 00:11:47.191 Number of Reclaim Groups: 2 00:11:47.191 Number of Reclaim Unit Handles: 8 00:11:47.191 Max Placement Identifiers: 128 00:11:47.191 Number of Namespaces Supported: 256 00:11:47.191 Reclaim Unit Nominal Size: 6000000 bytes 00:11:47.191 Estimated Reclaim Unit Time Limit: Not Reported 00:11:47.191 RUH Desc #000: RUH Type: Initially Isolated 00:11:47.191 RUH Desc #001: RUH Type: Initially Isolated 00:11:47.191 RUH Desc #002: RUH Type: Initially Isolated 00:11:47.191 RUH Desc #003: RUH Type: Initially Isolated 00:11:47.191 RUH Desc #004: RUH Type: Initially Isolated 00:11:47.191 RUH Desc #005: RUH Type: Initially Isolated 00:11:47.191 RUH Desc #006: RUH Type: Initially Isolated 00:11:47.191 RUH Desc #007: RUH Type: Initially Isolated 00:11:47.191 00:11:47.191 FDP reclaim unit handle usage log page 00:11:47.191 ====================================== 00:11:47.191 Number of Reclaim Unit Handles: 8 00:11:47.191 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:47.191 RUH Usage Desc #001: RUH Attributes: Unused 00:11:47.191 RUH Usage Desc #002: RUH Attributes: Unused 00:11:47.191 RUH Usage Desc #003: RUH Attributes: Unused 00:11:47.191 RUH Usage Desc #004: RUH Attributes: Unused 00:11:47.191 RUH Usage Desc #005: RUH Attributes: Unused 00:11:47.191 RUH Usage Desc #006: RUH Attributes: Unused 00:11:47.191 RUH Usage Desc #007: RUH Attributes: Unused 00:11:47.191 00:11:47.191 FDP statistics log page 00:11:47.191 ======================= 00:11:47.191 Host bytes with metadata written: 373006336 00:11:47.191 Media bytes with metadata written: 373088256 00:11:47.191 Media bytes erased: 0 00:11:47.191 00:11:47.191 FDP events log page 00:11:47.191 =================== 00:11:47.191 Number of FDP events: 0 00:11:47.191 00:11:47.191 00:11:47.191 real 0m1.744s 00:11:47.191 user 0m0.673s 00:11:47.191 sys 0m0.858s 00:11:47.191 05:09:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.191 05:09:06 -- common/autotest_common.sh@10 -- # set +x 00:11:47.191 ************************************ 00:11:47.191 END TEST nvme_identify 00:11:47.191 ************************************
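The identify dump above is plain "Field: Value" text once the elapsed-time stamps that the CI prepends to every message are stripped, so it can be folded into a dictionary in a few lines for post-processing. A minimal Python sketch (parse_identify is a hypothetical helper for reading this log, not an SPDK tool; repeated keys such as "Max:" simply overwrite earlier values in this simple version):

    import re

    def parse_identify(text):
        """Collect 'Field: Value' pairs from an spdk_nvme_identify dump."""
        # Assumes one log message per line, as in the raw console log.
        # Drop the per-message elapsed-time stamps, e.g. '00:11:47.191'.
        text = re.sub(r"\b\d{2}:\d{2}:\d{2}\.\d{3}\b", " ", text)
        fields = {}
        for line in text.splitlines():
            line = line.strip()
            # Skip blanks and the '====' / '----' / '****' banner lines.
            if not line or set(line) <= {"=", "-", "*"}:
                continue
            key, sep, value = line.partition(":")
            if sep:
                fields[key.strip()] = value.strip()
        return fields

    # e.g. parse_identify(log_text)["Number of Reclaim Groups"] == "2"

00:11:47.450 05:09:06 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:47.450 05:09:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:47.450 05:09:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:47.450 05:09:06 -- common/autotest_common.sh@10 -- # set +x 00:11:47.450 ************************************ 00:11:47.450 START TEST nvme_perf 00:11:47.450 ************************************ 00:11:47.450 05:09:06 -- common/autotest_common.sh@1104 -- # nvme_perf 00:11:47.450 05:09:06 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:48.829 Initializing NVMe Controllers 00:11:48.829 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:48.829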
Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:48.829 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:48.829 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:48.829 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:48.829 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:11:48.829 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:11:48.829 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:11:48.829 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:11:48.829 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:11:48.829 Initialization complete. Launching workers. 00:11:48.829 ======================================================== 00:11:48.829 Latency(us) 00:11:48.829 Device Information : IOPS MiB/s Average min max 00:11:48.829 PCIE (0000:00:06.0) NSID 1 from core 0: 13303.67 155.90 9619.53 7756.58 44717.67 00:11:48.829 PCIE (0000:00:07.0) NSID 1 from core 0: 13303.67 155.90 9613.08 7821.61 43374.00 00:11:48.829 PCIE (0000:00:09.0) NSID 1 from core 0: 13303.67 155.90 9601.55 7793.66 42912.53 00:11:48.829 PCIE (0000:00:08.0) NSID 1 from core 0: 13303.67 155.90 9588.70 7931.90 41415.68 00:11:48.829 PCIE (0000:00:08.0) NSID 2 from core 0: 13303.67 155.90 9576.30 7953.92 39935.01 00:11:48.829 PCIE (0000:00:08.0) NSID 3 from core 0: 13431.59 157.40 9472.33 7919.50 28154.30 00:11:48.829 ======================================================== 00:11:48.829 Total : 79949.95 936.91 9578.41 7756.58 44717.67 00:11:48.829 00:11:48.829 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:48.829 ================================================================================= 00:11:48.829 1.00000% : 7989.150us 00:11:48.829 10.00000% : 8301.227us 00:11:48.829 25.00000% : 8675.718us 00:11:48.829 50.00000% : 9299.870us 00:11:48.829 75.00000% : 9861.608us 00:11:48.829 90.00000% : 10298.514us 00:11:48.829 95.00000% : 11172.328us 00:11:48.829 98.00000% : 11921.310us 00:11:48.829 99.00000% : 12732.709us 00:11:48.829 99.50000% : 42692.023us 00:11:48.829 99.90000% : 44439.650us 00:11:48.829 99.99000% : 44689.310us 00:11:48.829 99.99900% : 44938.971us 00:11:48.829 99.99990% : 44938.971us 00:11:48.829 99.99999% : 44938.971us 00:11:48.829 00:11:48.829 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:11:48.829 ================================================================================= 00:11:48.829 1.00000% : 8176.396us 00:11:48.829 10.00000% : 8426.057us 00:11:48.829 25.00000% : 8738.133us 00:11:48.829 50.00000% : 9237.455us 00:11:48.829 75.00000% : 9799.192us 00:11:48.829 90.00000% : 10298.514us 00:11:48.829 95.00000% : 11172.328us 00:11:48.829 98.00000% : 11796.480us 00:11:48.829 99.00000% : 13169.615us 00:11:48.829 99.50000% : 41443.718us 00:11:48.829 99.90000% : 43191.345us 00:11:48.829 99.99000% : 43441.006us 00:11:48.829 99.99900% : 43441.006us 00:11:48.829 99.99990% : 43441.006us 00:11:48.829 99.99999% : 43441.006us 00:11:48.829 00:11:48.829 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:11:48.829 ================================================================================= 00:11:48.829 1.00000% : 8176.396us 00:11:48.829 10.00000% : 8426.057us 00:11:48.829 25.00000% : 8738.133us 00:11:48.829 50.00000% : 9237.455us 00:11:48.829 75.00000% : 9799.192us 00:11:48.829 90.00000% : 10298.514us 00:11:48.829 95.00000% : 11047.497us 00:11:48.829 98.00000% : 11671.650us 00:11:48.829 99.00000% : 12483.048us 00:11:48.829 99.50000% : 40944.396us 00:11:48.829 99.90000% : 42692.023us 
00:11:48.829 99.99000% : 42941.684us 00:11:48.829 99.99900% : 42941.684us 00:11:48.829 99.99990% : 42941.684us 00:11:48.829 99.99999% : 42941.684us 00:11:48.829 00:11:48.829 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:11:48.829 ================================================================================= 00:11:48.829 1.00000% : 8176.396us 00:11:48.829 10.00000% : 8426.057us 00:11:48.829 25.00000% : 8738.133us 00:11:48.829 50.00000% : 9237.455us 00:11:48.829 75.00000% : 9799.192us 00:11:48.829 90.00000% : 10298.514us 00:11:48.829 95.00000% : 11047.497us 00:11:48.829 98.00000% : 11734.065us 00:11:48.829 99.00000% : 12795.124us 00:11:48.829 99.50000% : 39446.430us 00:11:48.829 99.90000% : 41194.057us 00:11:48.829 99.99000% : 41443.718us 00:11:48.829 99.99900% : 41443.718us 00:11:48.829 99.99990% : 41443.718us 00:11:48.829 99.99999% : 41443.718us 00:11:48.829 00:11:48.829 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:11:48.829 ================================================================================= 00:11:48.829 1.00000% : 8176.396us 00:11:48.829 10.00000% : 8426.057us 00:11:48.829 25.00000% : 8738.133us 00:11:48.829 50.00000% : 9237.455us 00:11:48.829 75.00000% : 9799.192us 00:11:48.829 90.00000% : 10298.514us 00:11:48.829 95.00000% : 11047.497us 00:11:48.829 98.00000% : 11734.065us 00:11:48.829 99.00000% : 13232.030us 00:11:48.829 99.50000% : 37948.465us 00:11:48.829 99.90000% : 39696.091us 00:11:48.829 99.99000% : 39945.752us 00:11:48.829 99.99900% : 39945.752us 00:11:48.829 99.99990% : 39945.752us 00:11:48.829 99.99999% : 39945.752us 00:11:48.829 00:11:48.829 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:11:48.829 ================================================================================= 00:11:48.829 1.00000% : 8176.396us 00:11:48.829 10.00000% : 8426.057us 00:11:48.829 25.00000% : 8738.133us 00:11:48.829 50.00000% : 9299.870us 00:11:48.829 75.00000% : 9799.192us 00:11:48.829 90.00000% : 10298.514us 00:11:48.829 95.00000% : 11109.912us 00:11:48.829 98.00000% : 11734.065us 00:11:48.829 99.00000% : 13294.446us 00:11:48.829 99.50000% : 26214.400us 00:11:48.829 99.90000% : 27837.196us 00:11:48.829 99.99000% : 28211.688us 00:11:48.829 99.99900% : 28211.688us 00:11:48.829 99.99990% : 28211.688us 00:11:48.829 99.99999% : 28211.688us 00:11:48.829 00:11:48.829 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:48.829 ============================================================================== 00:11:48.829 Range in us Cumulative IO count 00:11:48.829 7739.490 - 7770.697: 0.0150% ( 2) 00:11:48.829 7770.697 - 7801.905: 0.0300% ( 2) 00:11:48.829 7801.905 - 7833.112: 0.0901% ( 8) 00:11:48.829 7833.112 - 7864.320: 0.1427% ( 7) 00:11:48.829 7864.320 - 7895.528: 0.3305% ( 25) 00:11:48.829 7895.528 - 7926.735: 0.5559% ( 30) 00:11:48.829 7926.735 - 7957.943: 0.9165% ( 48) 00:11:48.829 7957.943 - 7989.150: 1.3597% ( 59) 00:11:48.829 7989.150 - 8051.566: 2.8170% ( 194) 00:11:48.829 8051.566 - 8113.981: 4.7852% ( 262) 00:11:48.830 8113.981 - 8176.396: 6.9862% ( 293) 00:11:48.830 8176.396 - 8238.811: 9.4276% ( 325) 00:11:48.830 8238.811 - 8301.227: 11.7338% ( 307) 00:11:48.830 8301.227 - 8363.642: 14.3029% ( 342) 00:11:48.830 8363.642 - 8426.057: 16.8945% ( 345) 00:11:48.830 8426.057 - 8488.472: 19.3284% ( 324) 00:11:48.830 8488.472 - 8550.888: 21.8600% ( 337) 00:11:48.830 8550.888 - 8613.303: 24.3239% ( 328) 00:11:48.830 8613.303 - 8675.718: 26.8855% ( 341) 00:11:48.830 8675.718 - 8738.133: 29.5147% ( 350) 
00:11:48.830 8738.133 - 8800.549: 31.8960% ( 317) 00:11:48.830 8800.549 - 8862.964: 34.4126% ( 335) 00:11:48.830 8862.964 - 8925.379: 36.9591% ( 339) 00:11:48.830 8925.379 - 8987.794: 39.4757% ( 335) 00:11:48.830 8987.794 - 9050.210: 41.9697% ( 332) 00:11:48.830 9050.210 - 9112.625: 44.4712% ( 333) 00:11:48.830 9112.625 - 9175.040: 47.0102% ( 338) 00:11:48.830 9175.040 - 9237.455: 49.5343% ( 336) 00:11:48.830 9237.455 - 9299.870: 52.0057% ( 329) 00:11:48.830 9299.870 - 9362.286: 54.4922% ( 331) 00:11:48.830 9362.286 - 9424.701: 57.0312% ( 338) 00:11:48.830 9424.701 - 9487.116: 59.6529% ( 349) 00:11:48.830 9487.116 - 9549.531: 62.1770% ( 336) 00:11:48.830 9549.531 - 9611.947: 64.8062% ( 350) 00:11:48.830 9611.947 - 9674.362: 67.5331% ( 363) 00:11:48.830 9674.362 - 9736.777: 70.2299% ( 359) 00:11:48.830 9736.777 - 9799.192: 73.0168% ( 371) 00:11:48.830 9799.192 - 9861.608: 75.7662% ( 366) 00:11:48.830 9861.608 - 9924.023: 78.5081% ( 365) 00:11:48.830 9924.023 - 9986.438: 81.2124% ( 360) 00:11:48.830 9986.438 - 10048.853: 83.6914% ( 330) 00:11:48.830 10048.853 - 10111.269: 86.0427% ( 313) 00:11:48.830 10111.269 - 10173.684: 87.8380% ( 239) 00:11:48.830 10173.684 - 10236.099: 89.1376% ( 173) 00:11:48.830 10236.099 - 10298.514: 90.1142% ( 130) 00:11:48.830 10298.514 - 10360.930: 90.7602% ( 86) 00:11:48.830 10360.930 - 10423.345: 91.3086% ( 73) 00:11:48.830 10423.345 - 10485.760: 91.7668% ( 61) 00:11:48.830 10485.760 - 10548.175: 92.1499% ( 51) 00:11:48.830 10548.175 - 10610.590: 92.5105% ( 48) 00:11:48.830 10610.590 - 10673.006: 92.9087% ( 53) 00:11:48.830 10673.006 - 10735.421: 93.1490% ( 32) 00:11:48.830 10735.421 - 10797.836: 93.4044% ( 34) 00:11:48.830 10797.836 - 10860.251: 93.6899% ( 38) 00:11:48.830 10860.251 - 10922.667: 93.9754% ( 38) 00:11:48.830 10922.667 - 10985.082: 94.2383% ( 35) 00:11:48.830 10985.082 - 11047.497: 94.5012% ( 35) 00:11:48.830 11047.497 - 11109.912: 94.7867% ( 38) 00:11:48.830 11109.912 - 11172.328: 95.0571% ( 36) 00:11:48.830 11172.328 - 11234.743: 95.3425% ( 38) 00:11:48.830 11234.743 - 11297.158: 95.6205% ( 37) 00:11:48.830 11297.158 - 11359.573: 95.8609% ( 32) 00:11:48.830 11359.573 - 11421.989: 96.1538% ( 39) 00:11:48.830 11421.989 - 11484.404: 96.3717% ( 29) 00:11:48.830 11484.404 - 11546.819: 96.6271% ( 34) 00:11:48.830 11546.819 - 11609.234: 96.9126% ( 38) 00:11:48.830 11609.234 - 11671.650: 97.1529% ( 32) 00:11:48.830 11671.650 - 11734.065: 97.4008% ( 33) 00:11:48.830 11734.065 - 11796.480: 97.6412% ( 32) 00:11:48.830 11796.480 - 11858.895: 97.9041% ( 35) 00:11:48.830 11858.895 - 11921.310: 98.1445% ( 32) 00:11:48.830 11921.310 - 11983.726: 98.3699% ( 30) 00:11:48.830 11983.726 - 12046.141: 98.4976% ( 17) 00:11:48.830 12046.141 - 12108.556: 98.5802% ( 11) 00:11:48.830 12108.556 - 12170.971: 98.6629% ( 11) 00:11:48.830 12170.971 - 12233.387: 98.7154% ( 7) 00:11:48.830 12233.387 - 12295.802: 98.7455% ( 4) 00:11:48.830 12295.802 - 12358.217: 98.7831% ( 5) 00:11:48.830 12358.217 - 12420.632: 98.8206% ( 5) 00:11:48.830 12420.632 - 12483.048: 98.8582% ( 5) 00:11:48.830 12483.048 - 12545.463: 98.8957% ( 5) 00:11:48.830 12545.463 - 12607.878: 98.9408% ( 6) 00:11:48.830 12607.878 - 12670.293: 98.9709% ( 4) 00:11:48.830 12670.293 - 12732.709: 99.0009% ( 4) 00:11:48.830 12732.709 - 12795.124: 99.0234% ( 3) 00:11:48.830 12795.124 - 12857.539: 99.0385% ( 2) 00:11:48.830 40445.074 - 40694.735: 99.0685% ( 4) 00:11:48.830 40694.735 - 40944.396: 99.1211% ( 7) 00:11:48.830 40944.396 - 41194.057: 99.1812% ( 8) 00:11:48.830 41194.057 - 41443.718: 99.2338% ( 7) 00:11:48.830 
41443.718 - 41693.379: 99.2864% ( 7) 00:11:48.830 41693.379 - 41943.040: 99.3465% ( 8) 00:11:48.830 41943.040 - 42192.701: 99.3990% ( 7) 00:11:48.830 42192.701 - 42442.362: 99.4516% ( 7) 00:11:48.830 42442.362 - 42692.023: 99.5042% ( 7) 00:11:48.830 42692.023 - 42941.684: 99.5643% ( 8) 00:11:48.830 42941.684 - 43191.345: 99.6244% ( 8) 00:11:48.830 43191.345 - 43441.006: 99.6845% ( 8) 00:11:48.830 43441.006 - 43690.667: 99.7446% ( 8) 00:11:48.830 43690.667 - 43940.328: 99.7972% ( 7) 00:11:48.830 43940.328 - 44189.989: 99.8723% ( 10) 00:11:48.830 44189.989 - 44439.650: 99.9249% ( 7) 00:11:48.830 44439.650 - 44689.310: 99.9925% ( 9) 00:11:48.830 44689.310 - 44938.971: 100.0000% ( 1) 00:11:48.830 00:11:48.830 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:11:48.830 ============================================================================== 00:11:48.830 Range in us Cumulative IO count 00:11:48.830 7801.905 - 7833.112: 0.0150% ( 2) 00:11:48.830 7833.112 - 7864.320: 0.0300% ( 2) 00:11:48.830 7864.320 - 7895.528: 0.0376% ( 1) 00:11:48.830 7895.528 - 7926.735: 0.0451% ( 1) 00:11:48.830 7926.735 - 7957.943: 0.0601% ( 2) 00:11:48.830 7957.943 - 7989.150: 0.1277% ( 9) 00:11:48.830 7989.150 - 8051.566: 0.3230% ( 26) 00:11:48.830 8051.566 - 8113.981: 0.7963% ( 63) 00:11:48.830 8113.981 - 8176.396: 1.8705% ( 143) 00:11:48.830 8176.396 - 8238.811: 3.6208% ( 233) 00:11:48.830 8238.811 - 8301.227: 6.0847% ( 328) 00:11:48.830 8301.227 - 8363.642: 8.7365% ( 353) 00:11:48.830 8363.642 - 8426.057: 11.7112% ( 396) 00:11:48.830 8426.057 - 8488.472: 14.5808% ( 382) 00:11:48.830 8488.472 - 8550.888: 17.5781% ( 399) 00:11:48.830 8550.888 - 8613.303: 20.5529% ( 396) 00:11:48.830 8613.303 - 8675.718: 23.6103% ( 407) 00:11:48.830 8675.718 - 8738.133: 26.5700% ( 394) 00:11:48.830 8738.133 - 8800.549: 29.5147% ( 392) 00:11:48.830 8800.549 - 8862.964: 32.3993% ( 384) 00:11:48.830 8862.964 - 8925.379: 35.4567% ( 407) 00:11:48.830 8925.379 - 8987.794: 38.4315% ( 396) 00:11:48.830 8987.794 - 9050.210: 41.4288% ( 399) 00:11:48.830 9050.210 - 9112.625: 44.4186% ( 398) 00:11:48.830 9112.625 - 9175.040: 47.4910% ( 409) 00:11:48.830 9175.040 - 9237.455: 50.4733% ( 397) 00:11:48.830 9237.455 - 9299.870: 53.4105% ( 391) 00:11:48.830 9299.870 - 9362.286: 56.4378% ( 403) 00:11:48.830 9362.286 - 9424.701: 59.4126% ( 396) 00:11:48.830 9424.701 - 9487.116: 62.4023% ( 398) 00:11:48.830 9487.116 - 9549.531: 65.4147% ( 401) 00:11:48.830 9549.531 - 9611.947: 68.3894% ( 396) 00:11:48.830 9611.947 - 9674.362: 71.3792% ( 398) 00:11:48.830 9674.362 - 9736.777: 74.3915% ( 401) 00:11:48.830 9736.777 - 9799.192: 77.4564% ( 408) 00:11:48.830 9799.192 - 9861.608: 80.3486% ( 385) 00:11:48.830 9861.608 - 9924.023: 83.0078% ( 354) 00:11:48.830 9924.023 - 9986.438: 85.3891% ( 317) 00:11:48.830 9986.438 - 10048.853: 87.0718% ( 224) 00:11:48.830 10048.853 - 10111.269: 88.3113% ( 165) 00:11:48.830 10111.269 - 10173.684: 89.1226% ( 108) 00:11:48.830 10173.684 - 10236.099: 89.7837% ( 88) 00:11:48.830 10236.099 - 10298.514: 90.3170% ( 71) 00:11:48.830 10298.514 - 10360.930: 90.7602% ( 59) 00:11:48.830 10360.930 - 10423.345: 91.1208% ( 48) 00:11:48.830 10423.345 - 10485.760: 91.4588% ( 45) 00:11:48.830 10485.760 - 10548.175: 91.8044% ( 46) 00:11:48.830 10548.175 - 10610.590: 92.1499% ( 46) 00:11:48.830 10610.590 - 10673.006: 92.5030% ( 47) 00:11:48.830 10673.006 - 10735.421: 92.8185% ( 42) 00:11:48.830 10735.421 - 10797.836: 93.1566% ( 45) 00:11:48.830 10797.836 - 10860.251: 93.4946% ( 45) 00:11:48.830 10860.251 - 10922.667: 93.8477% ( 47) 
00:11:48.830 10922.667 - 10985.082: 94.1707% ( 43) 00:11:48.830 10985.082 - 11047.497: 94.5162% ( 46) 00:11:48.830 11047.497 - 11109.912: 94.8543% ( 45) 00:11:48.830 11109.912 - 11172.328: 95.1848% ( 44) 00:11:48.830 11172.328 - 11234.743: 95.5003% ( 42) 00:11:48.830 11234.743 - 11297.158: 95.7933% ( 39) 00:11:48.830 11297.158 - 11359.573: 96.1013% ( 41) 00:11:48.830 11359.573 - 11421.989: 96.4243% ( 43) 00:11:48.830 11421.989 - 11484.404: 96.7323% ( 41) 00:11:48.830 11484.404 - 11546.819: 97.0252% ( 39) 00:11:48.830 11546.819 - 11609.234: 97.3407% ( 42) 00:11:48.830 11609.234 - 11671.650: 97.6262% ( 38) 00:11:48.830 11671.650 - 11734.065: 97.8966% ( 36) 00:11:48.830 11734.065 - 11796.480: 98.1145% ( 29) 00:11:48.830 11796.480 - 11858.895: 98.2722% ( 21) 00:11:48.830 11858.895 - 11921.310: 98.3849% ( 15) 00:11:48.830 11921.310 - 11983.726: 98.4600% ( 10) 00:11:48.830 11983.726 - 12046.141: 98.5201% ( 8) 00:11:48.830 12046.141 - 12108.556: 98.5877% ( 9) 00:11:48.830 12108.556 - 12170.971: 98.6553% ( 9) 00:11:48.830 12170.971 - 12233.387: 98.7154% ( 8) 00:11:48.830 12233.387 - 12295.802: 98.7605% ( 6) 00:11:48.830 12295.802 - 12358.217: 98.7755% ( 2) 00:11:48.830 12358.217 - 12420.632: 98.7981% ( 3) 00:11:48.830 12420.632 - 12483.048: 98.8131% ( 2) 00:11:48.830 12483.048 - 12545.463: 98.8281% ( 2) 00:11:48.830 12545.463 - 12607.878: 98.8431% ( 2) 00:11:48.830 12607.878 - 12670.293: 98.8582% ( 2) 00:11:48.830 12670.293 - 12732.709: 98.8807% ( 3) 00:11:48.830 12732.709 - 12795.124: 98.8957% ( 2) 00:11:48.831 12795.124 - 12857.539: 98.9108% ( 2) 00:11:48.831 12857.539 - 12919.954: 98.9333% ( 3) 00:11:48.831 12919.954 - 12982.370: 98.9558% ( 3) 00:11:48.831 12982.370 - 13044.785: 98.9709% ( 2) 00:11:48.831 13044.785 - 13107.200: 98.9934% ( 3) 00:11:48.831 13107.200 - 13169.615: 99.0084% ( 2) 00:11:48.831 13169.615 - 13232.030: 99.0309% ( 3) 00:11:48.831 13232.030 - 13294.446: 99.0385% ( 1) 00:11:48.831 38947.109 - 39196.770: 99.0535% ( 2) 00:11:48.831 39196.770 - 39446.430: 99.1061% ( 7) 00:11:48.831 39446.430 - 39696.091: 99.1587% ( 7) 00:11:48.831 39696.091 - 39945.752: 99.2112% ( 7) 00:11:48.831 39945.752 - 40195.413: 99.2788% ( 9) 00:11:48.831 40195.413 - 40445.074: 99.3314% ( 7) 00:11:48.831 40445.074 - 40694.735: 99.3915% ( 8) 00:11:48.831 40694.735 - 40944.396: 99.4291% ( 5) 00:11:48.831 40944.396 - 41194.057: 99.4817% ( 7) 00:11:48.831 41194.057 - 41443.718: 99.5493% ( 9) 00:11:48.831 41443.718 - 41693.379: 99.5944% ( 6) 00:11:48.831 41693.379 - 41943.040: 99.6544% ( 8) 00:11:48.831 41943.040 - 42192.701: 99.7145% ( 8) 00:11:48.831 42192.701 - 42442.362: 99.7746% ( 8) 00:11:48.831 42442.362 - 42692.023: 99.8347% ( 8) 00:11:48.831 42692.023 - 42941.684: 99.8873% ( 7) 00:11:48.831 42941.684 - 43191.345: 99.9549% ( 9) 00:11:48.831 43191.345 - 43441.006: 100.0000% ( 6) 00:11:48.831 00:11:48.831 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:11:48.831 ============================================================================== 00:11:48.831 Range in us Cumulative IO count 00:11:48.831 7770.697 - 7801.905: 0.0075% ( 1) 00:11:48.831 7801.905 - 7833.112: 0.0225% ( 2) 00:11:48.831 7833.112 - 7864.320: 0.0376% ( 2) 00:11:48.831 7864.320 - 7895.528: 0.0526% ( 2) 00:11:48.831 7895.528 - 7926.735: 0.0676% ( 2) 00:11:48.831 7926.735 - 7957.943: 0.0751% ( 1) 00:11:48.831 7957.943 - 7989.150: 0.1202% ( 6) 00:11:48.831 7989.150 - 8051.566: 0.3606% ( 32) 00:11:48.831 8051.566 - 8113.981: 0.7812% ( 56) 00:11:48.831 8113.981 - 8176.396: 1.9381% ( 154) 00:11:48.831 8176.396 - 8238.811: 
3.8912% ( 260) 00:11:48.831 8238.811 - 8301.227: 6.3777% ( 331) 00:11:48.831 8301.227 - 8363.642: 8.9543% ( 343) 00:11:48.831 8363.642 - 8426.057: 11.8540% ( 386) 00:11:48.831 8426.057 - 8488.472: 14.7461% ( 385) 00:11:48.831 8488.472 - 8550.888: 17.6608% ( 388) 00:11:48.831 8550.888 - 8613.303: 20.7106% ( 406) 00:11:48.831 8613.303 - 8675.718: 23.5953% ( 384) 00:11:48.831 8675.718 - 8738.133: 26.5550% ( 394) 00:11:48.831 8738.133 - 8800.549: 29.4772% ( 389) 00:11:48.831 8800.549 - 8862.964: 32.3918% ( 388) 00:11:48.831 8862.964 - 8925.379: 35.3591% ( 395) 00:11:48.831 8925.379 - 8987.794: 38.3188% ( 394) 00:11:48.831 8987.794 - 9050.210: 41.3011% ( 397) 00:11:48.831 9050.210 - 9112.625: 44.2233% ( 389) 00:11:48.831 9112.625 - 9175.040: 47.1529% ( 390) 00:11:48.831 9175.040 - 9237.455: 50.1427% ( 398) 00:11:48.831 9237.455 - 9299.870: 53.1100% ( 395) 00:11:48.831 9299.870 - 9362.286: 56.0772% ( 395) 00:11:48.831 9362.286 - 9424.701: 59.0971% ( 402) 00:11:48.831 9424.701 - 9487.116: 62.0643% ( 395) 00:11:48.831 9487.116 - 9549.531: 65.0916% ( 403) 00:11:48.831 9549.531 - 9611.947: 68.0814% ( 398) 00:11:48.831 9611.947 - 9674.362: 71.1163% ( 404) 00:11:48.831 9674.362 - 9736.777: 74.1962% ( 410) 00:11:48.831 9736.777 - 9799.192: 77.2611% ( 408) 00:11:48.831 9799.192 - 9861.608: 80.1983% ( 391) 00:11:48.831 9861.608 - 9924.023: 83.0003% ( 373) 00:11:48.831 9924.023 - 9986.438: 85.4041% ( 320) 00:11:48.831 9986.438 - 10048.853: 87.0643% ( 221) 00:11:48.831 10048.853 - 10111.269: 88.2812% ( 162) 00:11:48.831 10111.269 - 10173.684: 89.2127% ( 124) 00:11:48.831 10173.684 - 10236.099: 89.9189% ( 94) 00:11:48.831 10236.099 - 10298.514: 90.4898% ( 76) 00:11:48.831 10298.514 - 10360.930: 90.9706% ( 64) 00:11:48.831 10360.930 - 10423.345: 91.4138% ( 59) 00:11:48.831 10423.345 - 10485.760: 91.7819% ( 49) 00:11:48.831 10485.760 - 10548.175: 92.1499% ( 49) 00:11:48.831 10548.175 - 10610.590: 92.5105% ( 48) 00:11:48.831 10610.590 - 10673.006: 92.8786% ( 49) 00:11:48.831 10673.006 - 10735.421: 93.2392% ( 48) 00:11:48.831 10735.421 - 10797.836: 93.5922% ( 47) 00:11:48.831 10797.836 - 10860.251: 93.9378% ( 46) 00:11:48.831 10860.251 - 10922.667: 94.2834% ( 46) 00:11:48.831 10922.667 - 10985.082: 94.6439% ( 48) 00:11:48.831 10985.082 - 11047.497: 95.0045% ( 48) 00:11:48.831 11047.497 - 11109.912: 95.3501% ( 46) 00:11:48.831 11109.912 - 11172.328: 95.6956% ( 46) 00:11:48.831 11172.328 - 11234.743: 96.0036% ( 41) 00:11:48.831 11234.743 - 11297.158: 96.3266% ( 43) 00:11:48.831 11297.158 - 11359.573: 96.6421% ( 42) 00:11:48.831 11359.573 - 11421.989: 96.9501% ( 41) 00:11:48.831 11421.989 - 11484.404: 97.2806% ( 44) 00:11:48.831 11484.404 - 11546.819: 97.6112% ( 44) 00:11:48.831 11546.819 - 11609.234: 97.9117% ( 40) 00:11:48.831 11609.234 - 11671.650: 98.1596% ( 33) 00:11:48.831 11671.650 - 11734.065: 98.3398% ( 24) 00:11:48.831 11734.065 - 11796.480: 98.4826% ( 19) 00:11:48.831 11796.480 - 11858.895: 98.5877% ( 14) 00:11:48.831 11858.895 - 11921.310: 98.6478% ( 8) 00:11:48.831 11921.310 - 11983.726: 98.6929% ( 6) 00:11:48.831 11983.726 - 12046.141: 98.7380% ( 6) 00:11:48.831 12046.141 - 12108.556: 98.7831% ( 6) 00:11:48.831 12108.556 - 12170.971: 98.8206% ( 5) 00:11:48.831 12170.971 - 12233.387: 98.8582% ( 5) 00:11:48.831 12233.387 - 12295.802: 98.9032% ( 6) 00:11:48.831 12295.802 - 12358.217: 98.9333% ( 4) 00:11:48.831 12358.217 - 12420.632: 98.9709% ( 5) 00:11:48.831 12420.632 - 12483.048: 99.0084% ( 5) 00:11:48.831 12483.048 - 12545.463: 99.0309% ( 3) 00:11:48.831 12545.463 - 12607.878: 99.0385% ( 1) 
00:11:48.831 38697.448 - 38947.109: 99.0685% ( 4) 00:11:48.831 38947.109 - 39196.770: 99.1286% ( 8) 00:11:48.831 39196.770 - 39446.430: 99.1887% ( 8) 00:11:48.831 39446.430 - 39696.091: 99.2488% ( 8) 00:11:48.831 39696.091 - 39945.752: 99.3014% ( 7) 00:11:48.831 39945.752 - 40195.413: 99.3615% ( 8) 00:11:48.831 40195.413 - 40445.074: 99.4141% ( 7) 00:11:48.831 40445.074 - 40694.735: 99.4742% ( 8) 00:11:48.831 40694.735 - 40944.396: 99.5343% ( 8) 00:11:48.831 40944.396 - 41194.057: 99.5944% ( 8) 00:11:48.831 41194.057 - 41443.718: 99.6544% ( 8) 00:11:48.831 41443.718 - 41693.379: 99.7145% ( 8) 00:11:48.831 41693.379 - 41943.040: 99.7671% ( 7) 00:11:48.831 41943.040 - 42192.701: 99.8272% ( 8) 00:11:48.831 42192.701 - 42442.362: 99.8873% ( 8) 00:11:48.831 42442.362 - 42692.023: 99.9399% ( 7) 00:11:48.831 42692.023 - 42941.684: 100.0000% ( 8) 00:11:48.831 00:11:48.831 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:11:48.831 ============================================================================== 00:11:48.831 Range in us Cumulative IO count 00:11:48.831 7926.735 - 7957.943: 0.0225% ( 3) 00:11:48.831 7957.943 - 7989.150: 0.0601% ( 5) 00:11:48.831 7989.150 - 8051.566: 0.2855% ( 30) 00:11:48.831 8051.566 - 8113.981: 0.7737% ( 65) 00:11:48.831 8113.981 - 8176.396: 1.9005% ( 150) 00:11:48.831 8176.396 - 8238.811: 3.7034% ( 240) 00:11:48.831 8238.811 - 8301.227: 6.2725% ( 342) 00:11:48.831 8301.227 - 8363.642: 8.8341% ( 341) 00:11:48.831 8363.642 - 8426.057: 11.7188% ( 384) 00:11:48.831 8426.057 - 8488.472: 14.5658% ( 379) 00:11:48.831 8488.472 - 8550.888: 17.5556% ( 398) 00:11:48.831 8550.888 - 8613.303: 20.5153% ( 394) 00:11:48.831 8613.303 - 8675.718: 23.4075% ( 385) 00:11:48.831 8675.718 - 8738.133: 26.3296% ( 389) 00:11:48.831 8738.133 - 8800.549: 29.3194% ( 398) 00:11:48.831 8800.549 - 8862.964: 32.3017% ( 397) 00:11:48.831 8862.964 - 8925.379: 35.2464% ( 392) 00:11:48.831 8925.379 - 8987.794: 38.1761% ( 390) 00:11:48.831 8987.794 - 9050.210: 41.1809% ( 400) 00:11:48.831 9050.210 - 9112.625: 44.1481% ( 395) 00:11:48.831 9112.625 - 9175.040: 47.1605% ( 401) 00:11:48.831 9175.040 - 9237.455: 50.0977% ( 391) 00:11:48.831 9237.455 - 9299.870: 53.1776% ( 410) 00:11:48.831 9299.870 - 9362.286: 56.2124% ( 404) 00:11:48.831 9362.286 - 9424.701: 59.1572% ( 392) 00:11:48.831 9424.701 - 9487.116: 62.2296% ( 409) 00:11:48.831 9487.116 - 9549.531: 65.2118% ( 397) 00:11:48.831 9549.531 - 9611.947: 68.2767% ( 408) 00:11:48.831 9611.947 - 9674.362: 71.3717% ( 412) 00:11:48.831 9674.362 - 9736.777: 74.4216% ( 406) 00:11:48.831 9736.777 - 9799.192: 77.5015% ( 410) 00:11:48.831 9799.192 - 9861.608: 80.5739% ( 409) 00:11:48.831 9861.608 - 9924.023: 83.3684% ( 372) 00:11:48.831 9924.023 - 9986.438: 85.7272% ( 314) 00:11:48.831 9986.438 - 10048.853: 87.2822% ( 207) 00:11:48.831 10048.853 - 10111.269: 88.4991% ( 162) 00:11:48.831 10111.269 - 10173.684: 89.2653% ( 102) 00:11:48.831 10173.684 - 10236.099: 89.8663% ( 80) 00:11:48.831 10236.099 - 10298.514: 90.3996% ( 71) 00:11:48.831 10298.514 - 10360.930: 90.8654% ( 62) 00:11:48.831 10360.930 - 10423.345: 91.2861% ( 56) 00:11:48.831 10423.345 - 10485.760: 91.6692% ( 51) 00:11:48.831 10485.760 - 10548.175: 92.1049% ( 58) 00:11:48.831 10548.175 - 10610.590: 92.4654% ( 48) 00:11:48.831 10610.590 - 10673.006: 92.8410% ( 50) 00:11:48.831 10673.006 - 10735.421: 93.2242% ( 51) 00:11:48.831 10735.421 - 10797.836: 93.5847% ( 48) 00:11:48.831 10797.836 - 10860.251: 93.9453% ( 48) 00:11:48.831 10860.251 - 10922.667: 94.3134% ( 49) 00:11:48.831 10922.667 - 
10985.082: 94.7040% ( 52) 00:11:48.831 10985.082 - 11047.497: 95.0571% ( 47) 00:11:48.831 11047.497 - 11109.912: 95.4026% ( 46) 00:11:48.832 11109.912 - 11172.328: 95.7482% ( 46) 00:11:48.832 11172.328 - 11234.743: 96.0712% ( 43) 00:11:48.832 11234.743 - 11297.158: 96.3567% ( 38) 00:11:48.832 11297.158 - 11359.573: 96.6271% ( 36) 00:11:48.832 11359.573 - 11421.989: 96.9201% ( 39) 00:11:48.832 11421.989 - 11484.404: 97.1905% ( 36) 00:11:48.832 11484.404 - 11546.819: 97.4609% ( 36) 00:11:48.832 11546.819 - 11609.234: 97.7088% ( 33) 00:11:48.832 11609.234 - 11671.650: 97.9192% ( 28) 00:11:48.832 11671.650 - 11734.065: 98.1070% ( 25) 00:11:48.832 11734.065 - 11796.480: 98.2873% ( 24) 00:11:48.832 11796.480 - 11858.895: 98.4075% ( 16) 00:11:48.832 11858.895 - 11921.310: 98.4751% ( 9) 00:11:48.832 11921.310 - 11983.726: 98.5201% ( 6) 00:11:48.832 11983.726 - 12046.141: 98.5652% ( 6) 00:11:48.832 12046.141 - 12108.556: 98.6028% ( 5) 00:11:48.832 12108.556 - 12170.971: 98.6478% ( 6) 00:11:48.832 12170.971 - 12233.387: 98.7004% ( 7) 00:11:48.832 12233.387 - 12295.802: 98.7380% ( 5) 00:11:48.832 12295.802 - 12358.217: 98.7831% ( 6) 00:11:48.832 12358.217 - 12420.632: 98.8281% ( 6) 00:11:48.832 12420.632 - 12483.048: 98.8807% ( 7) 00:11:48.832 12483.048 - 12545.463: 98.9258% ( 6) 00:11:48.832 12545.463 - 12607.878: 98.9408% ( 2) 00:11:48.832 12607.878 - 12670.293: 98.9633% ( 3) 00:11:48.832 12670.293 - 12732.709: 98.9859% ( 3) 00:11:48.832 12732.709 - 12795.124: 99.0009% ( 2) 00:11:48.832 12795.124 - 12857.539: 99.0234% ( 3) 00:11:48.832 12857.539 - 12919.954: 99.0385% ( 2) 00:11:48.832 37199.482 - 37449.143: 99.0760% ( 5) 00:11:48.832 37449.143 - 37698.804: 99.1286% ( 7) 00:11:48.832 37698.804 - 37948.465: 99.1962% ( 9) 00:11:48.832 37948.465 - 38198.126: 99.2563% ( 8) 00:11:48.832 38198.126 - 38447.787: 99.3089% ( 7) 00:11:48.832 38447.787 - 38697.448: 99.3615% ( 7) 00:11:48.832 38697.448 - 38947.109: 99.4216% ( 8) 00:11:48.832 38947.109 - 39196.770: 99.4817% ( 8) 00:11:48.832 39196.770 - 39446.430: 99.5418% ( 8) 00:11:48.832 39446.430 - 39696.091: 99.6019% ( 8) 00:11:48.832 39696.091 - 39945.752: 99.6620% ( 8) 00:11:48.832 39945.752 - 40195.413: 99.7221% ( 8) 00:11:48.832 40195.413 - 40445.074: 99.7746% ( 7) 00:11:48.832 40445.074 - 40694.735: 99.8422% ( 9) 00:11:48.832 40694.735 - 40944.396: 99.8948% ( 7) 00:11:48.832 40944.396 - 41194.057: 99.9474% ( 7) 00:11:48.832 41194.057 - 41443.718: 100.0000% ( 7) 00:11:48.832 00:11:48.832 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:11:48.832 ============================================================================== 00:11:48.832 Range in us Cumulative IO count 00:11:48.832 7926.735 - 7957.943: 0.0150% ( 2) 00:11:48.832 7957.943 - 7989.150: 0.0526% ( 5) 00:11:48.832 7989.150 - 8051.566: 0.1878% ( 18) 00:11:48.832 8051.566 - 8113.981: 0.6535% ( 62) 00:11:48.832 8113.981 - 8176.396: 1.6902% ( 138) 00:11:48.832 8176.396 - 8238.811: 3.5607% ( 249) 00:11:48.832 8238.811 - 8301.227: 5.9345% ( 316) 00:11:48.832 8301.227 - 8363.642: 8.6313% ( 359) 00:11:48.832 8363.642 - 8426.057: 11.4784% ( 379) 00:11:48.832 8426.057 - 8488.472: 14.4606% ( 397) 00:11:48.832 8488.472 - 8550.888: 17.3978% ( 391) 00:11:48.832 8550.888 - 8613.303: 20.3350% ( 391) 00:11:48.832 8613.303 - 8675.718: 23.3173% ( 397) 00:11:48.832 8675.718 - 8738.133: 26.2770% ( 394) 00:11:48.832 8738.133 - 8800.549: 29.2142% ( 391) 00:11:48.832 8800.549 - 8862.964: 32.2115% ( 399) 00:11:48.832 8862.964 - 8925.379: 35.2239% ( 401) 00:11:48.832 8925.379 - 8987.794: 38.2737% ( 406) 
00:11:48.832 8987.794 - 9050.210: 41.1809% ( 387) 00:11:48.832 9050.210 - 9112.625: 44.2007% ( 402) 00:11:48.832 9112.625 - 9175.040: 47.2431% ( 405) 00:11:48.832 9175.040 - 9237.455: 50.2855% ( 405) 00:11:48.832 9237.455 - 9299.870: 53.3504% ( 408) 00:11:48.832 9299.870 - 9362.286: 56.4078% ( 407) 00:11:48.832 9362.286 - 9424.701: 59.4501% ( 405) 00:11:48.832 9424.701 - 9487.116: 62.5075% ( 407) 00:11:48.832 9487.116 - 9549.531: 65.5724% ( 408) 00:11:48.832 9549.531 - 9611.947: 68.5998% ( 403) 00:11:48.832 9611.947 - 9674.362: 71.6797% ( 410) 00:11:48.832 9674.362 - 9736.777: 74.7446% ( 408) 00:11:48.832 9736.777 - 9799.192: 77.8245% ( 410) 00:11:48.832 9799.192 - 9861.608: 80.8669% ( 405) 00:11:48.832 9861.608 - 9924.023: 83.6088% ( 365) 00:11:48.832 9924.023 - 9986.438: 85.7948% ( 291) 00:11:48.832 9986.438 - 10048.853: 87.3648% ( 209) 00:11:48.832 10048.853 - 10111.269: 88.5066% ( 152) 00:11:48.832 10111.269 - 10173.684: 89.3104% ( 107) 00:11:48.832 10173.684 - 10236.099: 89.8963% ( 78) 00:11:48.832 10236.099 - 10298.514: 90.4297% ( 71) 00:11:48.832 10298.514 - 10360.930: 90.8729% ( 59) 00:11:48.832 10360.930 - 10423.345: 91.3011% ( 57) 00:11:48.832 10423.345 - 10485.760: 91.7142% ( 55) 00:11:48.832 10485.760 - 10548.175: 92.0898% ( 50) 00:11:48.832 10548.175 - 10610.590: 92.4730% ( 51) 00:11:48.832 10610.590 - 10673.006: 92.8335% ( 48) 00:11:48.832 10673.006 - 10735.421: 93.2091% ( 50) 00:11:48.832 10735.421 - 10797.836: 93.5622% ( 47) 00:11:48.832 10797.836 - 10860.251: 93.9078% ( 46) 00:11:48.832 10860.251 - 10922.667: 94.2984% ( 52) 00:11:48.832 10922.667 - 10985.082: 94.6514% ( 47) 00:11:48.832 10985.082 - 11047.497: 95.0270% ( 50) 00:11:48.832 11047.497 - 11109.912: 95.3726% ( 46) 00:11:48.832 11109.912 - 11172.328: 95.7031% ( 44) 00:11:48.832 11172.328 - 11234.743: 95.9811% ( 37) 00:11:48.832 11234.743 - 11297.158: 96.2590% ( 37) 00:11:48.832 11297.158 - 11359.573: 96.5520% ( 39) 00:11:48.832 11359.573 - 11421.989: 96.7999% ( 33) 00:11:48.832 11421.989 - 11484.404: 97.0778% ( 37) 00:11:48.832 11484.404 - 11546.819: 97.3483% ( 36) 00:11:48.832 11546.819 - 11609.234: 97.5962% ( 33) 00:11:48.832 11609.234 - 11671.650: 97.8290% ( 31) 00:11:48.832 11671.650 - 11734.065: 98.0093% ( 24) 00:11:48.832 11734.065 - 11796.480: 98.1370% ( 17) 00:11:48.832 11796.480 - 11858.895: 98.2046% ( 9) 00:11:48.832 11858.895 - 11921.310: 98.2873% ( 11) 00:11:48.832 11921.310 - 11983.726: 98.3624% ( 10) 00:11:48.832 11983.726 - 12046.141: 98.4300% ( 9) 00:11:48.832 12046.141 - 12108.556: 98.5126% ( 11) 00:11:48.832 12108.556 - 12170.971: 98.5427% ( 4) 00:11:48.832 12170.971 - 12233.387: 98.5877% ( 6) 00:11:48.832 12233.387 - 12295.802: 98.6403% ( 7) 00:11:48.832 12295.802 - 12358.217: 98.6779% ( 5) 00:11:48.832 12358.217 - 12420.632: 98.7230% ( 6) 00:11:48.832 12420.632 - 12483.048: 98.7680% ( 6) 00:11:48.832 12483.048 - 12545.463: 98.7906% ( 3) 00:11:48.832 12545.463 - 12607.878: 98.8056% ( 2) 00:11:48.832 12607.878 - 12670.293: 98.8281% ( 3) 00:11:48.832 12670.293 - 12732.709: 98.8507% ( 3) 00:11:48.832 12732.709 - 12795.124: 98.8732% ( 3) 00:11:48.832 12795.124 - 12857.539: 98.8882% ( 2) 00:11:48.832 12857.539 - 12919.954: 98.9108% ( 3) 00:11:48.832 12919.954 - 12982.370: 98.9333% ( 3) 00:11:48.832 12982.370 - 13044.785: 98.9483% ( 2) 00:11:48.832 13044.785 - 13107.200: 98.9709% ( 3) 00:11:48.832 13107.200 - 13169.615: 98.9859% ( 2) 00:11:48.832 13169.615 - 13232.030: 99.0084% ( 3) 00:11:48.832 13232.030 - 13294.446: 99.0309% ( 3) 00:11:48.832 13294.446 - 13356.861: 99.0385% ( 1) 00:11:48.832 
35701.516 - 35951.177: 99.0610% ( 3) 00:11:48.832 35951.177 - 36200.838: 99.1136% ( 7) 00:11:48.832 36200.838 - 36450.499: 99.1737% ( 8) 00:11:48.832 36450.499 - 36700.160: 99.2338% ( 8) 00:11:48.832 36700.160 - 36949.821: 99.2939% ( 8) 00:11:48.832 36949.821 - 37199.482: 99.3465% ( 7) 00:11:48.832 37199.482 - 37449.143: 99.3990% ( 7) 00:11:48.832 37449.143 - 37698.804: 99.4666% ( 9) 00:11:48.832 37698.804 - 37948.465: 99.5267% ( 8) 00:11:48.832 37948.465 - 38198.126: 99.5793% ( 7) 00:11:48.832 38198.126 - 38447.787: 99.6394% ( 8) 00:11:48.832 38447.787 - 38697.448: 99.6995% ( 8) 00:11:48.832 38697.448 - 38947.109: 99.7596% ( 8) 00:11:48.832 38947.109 - 39196.770: 99.8197% ( 8) 00:11:48.832 39196.770 - 39446.430: 99.8798% ( 8) 00:11:48.832 39446.430 - 39696.091: 99.9399% ( 8) 00:11:48.832 39696.091 - 39945.752: 100.0000% ( 8) 00:11:48.832 00:11:48.832 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:11:48.832 ============================================================================== 00:11:48.832 Range in us Cumulative IO count 00:11:48.832 7895.528 - 7926.735: 0.0149% ( 2) 00:11:48.832 7926.735 - 7957.943: 0.0372% ( 3) 00:11:48.832 7957.943 - 7989.150: 0.0670% ( 4) 00:11:48.832 7989.150 - 8051.566: 0.2158% ( 20) 00:11:48.832 8051.566 - 8113.981: 0.7143% ( 67) 00:11:48.832 8113.981 - 8176.396: 1.6295% ( 123) 00:11:48.832 8176.396 - 8238.811: 3.6086% ( 266) 00:11:48.832 8238.811 - 8301.227: 5.8482% ( 301) 00:11:48.832 8301.227 - 8363.642: 8.5938% ( 369) 00:11:48.832 8363.642 - 8426.057: 11.3095% ( 365) 00:11:48.832 8426.057 - 8488.472: 14.2560% ( 396) 00:11:48.832 8488.472 - 8550.888: 17.2470% ( 402) 00:11:48.832 8550.888 - 8613.303: 20.2381% ( 402) 00:11:48.832 8613.303 - 8675.718: 23.1696% ( 394) 00:11:48.832 8675.718 - 8738.133: 26.1086% ( 395) 00:11:48.832 8738.133 - 8800.549: 28.9881% ( 387) 00:11:48.832 8800.549 - 8862.964: 31.8899% ( 390) 00:11:48.832 8862.964 - 8925.379: 34.8810% ( 402) 00:11:48.832 8925.379 - 8987.794: 37.8869% ( 404) 00:11:48.832 8987.794 - 9050.210: 40.8557% ( 399) 00:11:48.832 9050.210 - 9112.625: 43.9062% ( 410) 00:11:48.832 9112.625 - 9175.040: 46.8229% ( 392) 00:11:48.833 9175.040 - 9237.455: 49.8438% ( 406) 00:11:48.833 9237.455 - 9299.870: 52.8199% ( 400) 00:11:48.833 9299.870 - 9362.286: 55.8036% ( 401) 00:11:48.833 9362.286 - 9424.701: 58.9211% ( 419) 00:11:48.833 9424.701 - 9487.116: 61.9494% ( 407) 00:11:48.833 9487.116 - 9549.531: 65.0744% ( 420) 00:11:48.833 9549.531 - 9611.947: 68.1027% ( 407) 00:11:48.833 9611.947 - 9674.362: 71.1607% ( 411) 00:11:48.833 9674.362 - 9736.777: 74.1592% ( 403) 00:11:48.833 9736.777 - 9799.192: 77.2768% ( 419) 00:11:48.833 9799.192 - 9861.608: 80.2307% ( 397) 00:11:48.833 9861.608 - 9924.023: 82.9464% ( 365) 00:11:48.833 9924.023 - 9986.438: 85.1637% ( 298) 00:11:48.833 9986.438 - 10048.853: 86.8378% ( 225) 00:11:48.833 10048.853 - 10111.269: 88.0432% ( 162) 00:11:48.833 10111.269 - 10173.684: 88.9881% ( 127) 00:11:48.833 10173.684 - 10236.099: 89.5833% ( 80) 00:11:48.833 10236.099 - 10298.514: 90.0670% ( 65) 00:11:48.833 10298.514 - 10360.930: 90.5283% ( 62) 00:11:48.833 10360.930 - 10423.345: 90.9896% ( 62) 00:11:48.833 10423.345 - 10485.760: 91.4286% ( 59) 00:11:48.833 10485.760 - 10548.175: 91.8229% ( 53) 00:11:48.833 10548.175 - 10610.590: 92.2173% ( 53) 00:11:48.833 10610.590 - 10673.006: 92.6414% ( 57) 00:11:48.833 10673.006 - 10735.421: 93.0580% ( 56) 00:11:48.833 10735.421 - 10797.836: 93.4375% ( 51) 00:11:48.833 10797.836 - 10860.251: 93.7946% ( 48) 00:11:48.833 10860.251 - 10922.667: 
94.1592% ( 49) 00:11:48.833 10922.667 - 10985.082: 94.5089% ( 47) 00:11:48.833 10985.082 - 11047.497: 94.8735% ( 49) 00:11:48.833 11047.497 - 11109.912: 95.2530% ( 51) 00:11:48.833 11109.912 - 11172.328: 95.5804% ( 44) 00:11:48.833 11172.328 - 11234.743: 95.8780% ( 40) 00:11:48.833 11234.743 - 11297.158: 96.1682% ( 39) 00:11:48.833 11297.158 - 11359.573: 96.4658% ( 40) 00:11:48.833 11359.573 - 11421.989: 96.7857% ( 43) 00:11:48.833 11421.989 - 11484.404: 97.0982% ( 42) 00:11:48.833 11484.404 - 11546.819: 97.3735% ( 37) 00:11:48.833 11546.819 - 11609.234: 97.6488% ( 37) 00:11:48.833 11609.234 - 11671.650: 97.8869% ( 32) 00:11:48.833 11671.650 - 11734.065: 98.0432% ( 21) 00:11:48.833 11734.065 - 11796.480: 98.1920% ( 20) 00:11:48.833 11796.480 - 11858.895: 98.2664% ( 10) 00:11:48.833 11858.895 - 11921.310: 98.3408% ( 10) 00:11:48.833 11921.310 - 11983.726: 98.4226% ( 11) 00:11:48.833 11983.726 - 12046.141: 98.4673% ( 6) 00:11:48.833 12046.141 - 12108.556: 98.5119% ( 6) 00:11:48.833 12108.556 - 12170.971: 98.5491% ( 5) 00:11:48.833 12170.971 - 12233.387: 98.5938% ( 6) 00:11:48.833 12233.387 - 12295.802: 98.6384% ( 6) 00:11:48.833 12295.802 - 12358.217: 98.6756% ( 5) 00:11:48.833 12358.217 - 12420.632: 98.7202% ( 6) 00:11:48.833 12420.632 - 12483.048: 98.7574% ( 5) 00:11:48.833 12483.048 - 12545.463: 98.7798% ( 3) 00:11:48.833 12545.463 - 12607.878: 98.8021% ( 3) 00:11:48.833 12607.878 - 12670.293: 98.8170% ( 2) 00:11:48.833 12670.293 - 12732.709: 98.8393% ( 3) 00:11:48.833 12732.709 - 12795.124: 98.8542% ( 2) 00:11:48.833 12795.124 - 12857.539: 98.8690% ( 2) 00:11:48.833 12857.539 - 12919.954: 98.8914% ( 3) 00:11:48.833 12919.954 - 12982.370: 98.9137% ( 3) 00:11:48.833 12982.370 - 13044.785: 98.9360% ( 3) 00:11:48.833 13044.785 - 13107.200: 98.9509% ( 2) 00:11:48.833 13107.200 - 13169.615: 98.9732% ( 3) 00:11:48.833 13169.615 - 13232.030: 98.9955% ( 3) 00:11:48.833 13232.030 - 13294.446: 99.0104% ( 2) 00:11:48.833 13294.446 - 13356.861: 99.0327% ( 3) 00:11:48.833 13356.861 - 13419.276: 99.0476% ( 2) 00:11:48.833 24341.943 - 24466.773: 99.0774% ( 4) 00:11:48.833 24466.773 - 24591.604: 99.1146% ( 5) 00:11:48.833 24591.604 - 24716.434: 99.1443% ( 4) 00:11:48.833 24716.434 - 24841.265: 99.1741% ( 4) 00:11:48.833 24841.265 - 24966.095: 99.2113% ( 5) 00:11:48.833 24966.095 - 25090.926: 99.2411% ( 4) 00:11:48.833 25090.926 - 25215.756: 99.2708% ( 4) 00:11:48.833 25215.756 - 25340.587: 99.3006% ( 4) 00:11:48.833 25340.587 - 25465.417: 99.3304% ( 4) 00:11:48.833 25465.417 - 25590.248: 99.3601% ( 4) 00:11:48.833 25590.248 - 25715.078: 99.3899% ( 4) 00:11:48.833 25715.078 - 25839.909: 99.4196% ( 4) 00:11:48.833 25839.909 - 25964.739: 99.4568% ( 5) 00:11:48.833 25964.739 - 26089.570: 99.4866% ( 4) 00:11:48.833 26089.570 - 26214.400: 99.5164% ( 4) 00:11:48.833 26214.400 - 26339.230: 99.5461% ( 4) 00:11:48.833 26339.230 - 26464.061: 99.5833% ( 5) 00:11:48.833 26464.061 - 26588.891: 99.6131% ( 4) 00:11:48.833 26588.891 - 26713.722: 99.6503% ( 5) 00:11:48.833 26713.722 - 26838.552: 99.6801% ( 4) 00:11:48.833 26838.552 - 26963.383: 99.7098% ( 4) 00:11:48.833 26963.383 - 27088.213: 99.7396% ( 4) 00:11:48.833 27088.213 - 27213.044: 99.7693% ( 4) 00:11:48.833 27213.044 - 27337.874: 99.7991% ( 4) 00:11:48.833 27337.874 - 27462.705: 99.8289% ( 4) 00:11:48.833 27462.705 - 27587.535: 99.8586% ( 4) 00:11:48.833 27587.535 - 27712.366: 99.8884% ( 4) 00:11:48.833 27712.366 - 27837.196: 99.9182% ( 4) 00:11:48.833 27837.196 - 27962.027: 99.9479% ( 4) 00:11:48.833 27962.027 - 28086.857: 99.9777% ( 4) 00:11:48.833 28086.857 
- 28211.688: 100.0000% ( 3) 00:11:48.833 00:11:48.833
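The percentile rows in each "Summary latency data" block above can be read straight off the matching cumulative "Latency histogram" that follows it: each histogram row gives a bucket range in microseconds, the cumulative share of I/Os completed at or below that bucket, and the per-bucket completion count in parentheses. A minimal Python lookup sketch (percentile_us and its row tuples are illustrative post-processing, not how spdk_nvme_perf computes the summaries internally):

    def percentile_us(rows, p):
        """rows: (bucket_end_us, cumulative_pct) pairs in ascending order,
        as parsed from one 'Latency histogram' block above.
        Returns the first bucket boundary whose cumulative percentage
        reaches p, i.e. the bucket reported for the p-th percentile."""
        for bucket_end_us, cumulative_pct in rows:
            if cumulative_pct >= p:
                return bucket_end_us
        return None

    # e.g. for the 0000:00:06.0 read histogram above, p=50.0 resolves to
    # the first row whose cumulative share is >= 50.0000%.

05:09:07 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:50.211 Initializing NVMe Controllers 00:11:50.211 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:50.211 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:11:50.211 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:11:50.211 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:11:50.211 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:50.211 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:11:50.211 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:11:50.211 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:11:50.211 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:11:50.211 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:11:50.211 Initialization complete. Launching workers. 00:11:50.211 ======================================================== 00:11:50.211 Latency(us) 00:11:50.211 Device Information : IOPS MiB/s Average min max 00:11:50.211 PCIE (0000:00:06.0) NSID 1 from core 0: 12508.85 146.59 10229.04 6922.78 30040.01 00:11:50.211 PCIE (0000:00:07.0) NSID 1 from core 0: 12508.85 146.59 10223.47 7459.01 27751.40 00:11:50.211 PCIE (0000:00:09.0) NSID 1 from core 0: 12508.85 146.59 10216.75 7133.01 26764.20 00:11:50.211 PCIE (0000:00:08.0) NSID 1 from core 0: 12508.85 146.59 10210.10 7344.81 25678.89 00:11:50.211 PCIE (0000:00:08.0) NSID 2 from core 0: 12508.85 146.59 10198.97 7642.50 24584.94 00:11:50.211 PCIE (0000:00:08.0) NSID 3 from core 0: 12508.85 146.59 10185.58 7372.81 23030.49 00:11:50.211 ======================================================== 00:11:50.211 Total : 75053.10 879.53 10210.65 6922.78 30040.01 00:11:50.211 00:11:50.211 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:50.211 ================================================================================= 00:11:50.211 1.00000% : 7926.735us 00:11:50.211 10.00000% : 8675.718us 00:11:50.211 25.00000% : 9175.040us 00:11:50.211 50.00000% : 9924.023us 00:11:50.211 75.00000% : 10797.836us 00:11:50.211 90.00000% : 12046.141us 00:11:50.211 95.00000% : 12732.709us 00:11:50.211 98.00000% : 13668.937us 00:11:50.211 99.00000% : 26339.230us 00:11:50.211 99.50000% : 27712.366us 00:11:50.211 99.90000% : 29210.331us 00:11:50.211 99.99000% : 30084.145us 00:11:50.211 99.99900% : 30084.145us 00:11:50.211 99.99990% : 30084.145us 00:11:50.211 99.99999% : 30084.145us 00:11:50.211 00:11:50.211 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:11:50.211 ================================================================================= 00:11:50.211 1.00000% : 8113.981us 00:11:50.211 10.00000% : 8925.379us 00:11:50.211 25.00000% : 9237.455us 00:11:50.211 50.00000% : 9799.192us 00:11:50.211 75.00000% : 10735.421us 00:11:50.211 90.00000% : 11983.726us 00:11:50.211 95.00000% : 12607.878us 00:11:50.211 98.00000% : 13481.691us 00:11:50.211 99.00000% : 25839.909us 00:11:50.211 99.50000% : 26713.722us 00:11:50.211 99.90000% : 27213.044us 00:11:50.211 99.99000% : 27337.874us 00:11:50.211 99.99900% : 27837.196us 00:11:50.211 99.99990% : 27837.196us 00:11:50.211 99.99999% : 27837.196us 00:11:50.211 00:11:50.211 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:11:50.211 ================================================================================= 00:11:50.211 1.00000% : 7833.112us 00:11:50.211 10.00000% : 8738.133us 00:11:50.211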
25.00000% : 9299.870us 00:11:50.211 50.00000% : 9861.608us 00:11:50.211 75.00000% : 10860.251us 00:11:50.211 90.00000% : 11921.310us 00:11:50.211 95.00000% : 12607.878us 00:11:50.211 98.00000% : 13419.276us 00:11:50.211 99.00000% : 24092.282us 00:11:50.211 99.50000% : 24966.095us 00:11:50.211 99.90000% : 26339.230us 00:11:50.211 99.99000% : 26838.552us 00:11:50.211 99.99900% : 26838.552us 00:11:50.211 99.99990% : 26838.552us 00:11:50.211 99.99999% : 26838.552us 00:11:50.211 00:11:50.211 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:11:50.211 ================================================================================= 00:11:50.211 1.00000% : 8051.566us 00:11:50.211 10.00000% : 8800.549us 00:11:50.211 25.00000% : 9237.455us 00:11:50.211 50.00000% : 9861.608us 00:11:50.211 75.00000% : 10860.251us 00:11:50.211 90.00000% : 11983.726us 00:11:50.211 95.00000% : 12483.048us 00:11:50.211 98.00000% : 13419.276us 00:11:50.211 99.00000% : 23218.469us 00:11:50.211 99.50000% : 24217.112us 00:11:50.211 99.90000% : 25340.587us 00:11:50.211 99.99000% : 25715.078us 00:11:50.211 99.99900% : 25715.078us 00:11:50.211 99.99990% : 25715.078us 00:11:50.211 99.99999% : 25715.078us 00:11:50.211 00:11:50.211 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:11:50.211 ================================================================================= 00:11:50.211 1.00000% : 8238.811us 00:11:50.211 10.00000% : 8800.549us 00:11:50.211 25.00000% : 9237.455us 00:11:50.211 50.00000% : 9861.608us 00:11:50.211 75.00000% : 10797.836us 00:11:50.211 90.00000% : 11921.310us 00:11:50.211 95.00000% : 12545.463us 00:11:50.211 98.00000% : 13481.691us 00:11:50.211 99.00000% : 21970.164us 00:11:50.211 99.50000% : 23093.638us 00:11:50.211 99.90000% : 24217.112us 00:11:50.211 99.99000% : 24591.604us 00:11:50.211 99.99900% : 24591.604us 00:11:50.211 99.99990% : 24591.604us 00:11:50.211 99.99999% : 24591.604us 00:11:50.211 00:11:50.211 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:11:50.211 ================================================================================= 00:11:50.211 1.00000% : 8113.981us 00:11:50.211 10.00000% : 8862.964us 00:11:50.211 25.00000% : 9299.870us 00:11:50.211 50.00000% : 9861.608us 00:11:50.211 75.00000% : 10797.836us 00:11:50.211 90.00000% : 11983.726us 00:11:50.211 95.00000% : 12607.878us 00:11:50.211 98.00000% : 13606.522us 00:11:50.212 99.00000% : 20721.859us 00:11:50.212 99.50000% : 21346.011us 00:11:50.212 99.90000% : 22719.147us 00:11:50.212 99.99000% : 23093.638us 00:11:50.212 99.99900% : 23093.638us 00:11:50.212 99.99990% : 23093.638us 00:11:50.212 99.99999% : 23093.638us 00:11:50.212 00:11:50.212 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:50.212 ============================================================================== 00:11:50.212 Range in us Cumulative IO count 00:11:50.212 6896.884 - 6928.091: 0.0080% ( 1) 00:11:50.212 6928.091 - 6959.299: 0.0399% ( 4) 00:11:50.212 6959.299 - 6990.507: 0.0478% ( 1) 00:11:50.212 7052.922 - 7084.130: 0.0558% ( 1) 00:11:50.212 7177.752 - 7208.960: 0.0638% ( 1) 00:11:50.212 7208.960 - 7240.168: 0.0957% ( 4) 00:11:50.212 7240.168 - 7271.375: 0.1036% ( 1) 00:11:50.212 7271.375 - 7302.583: 0.1116% ( 1) 00:11:50.212 7302.583 - 7333.790: 0.1196% ( 1) 00:11:50.212 7333.790 - 7364.998: 0.1435% ( 3) 00:11:50.212 7364.998 - 7396.206: 0.1594% ( 2) 00:11:50.212 7396.206 - 7427.413: 0.1754% ( 2) 00:11:50.212 7427.413 - 7458.621: 0.1834% ( 1) 00:11:50.212 7458.621 - 7489.829: 0.2152% ( 
4) 00:11:50.212 7489.829 - 7521.036: 0.2232% ( 1) 00:11:50.212 7521.036 - 7552.244: 0.2392% ( 2) 00:11:50.212 7552.244 - 7583.451: 0.2950% ( 7) 00:11:50.212 7583.451 - 7614.659: 0.3189% ( 3) 00:11:50.212 7614.659 - 7645.867: 0.3268% ( 1) 00:11:50.212 7645.867 - 7677.074: 0.3747% ( 6) 00:11:50.212 7677.074 - 7708.282: 0.4145% ( 5) 00:11:50.212 7708.282 - 7739.490: 0.5182% ( 13) 00:11:50.212 7739.490 - 7770.697: 0.5660% ( 6) 00:11:50.212 7770.697 - 7801.905: 0.6059% ( 5) 00:11:50.212 7801.905 - 7833.112: 0.6537% ( 6) 00:11:50.212 7833.112 - 7864.320: 0.7254% ( 9) 00:11:50.212 7864.320 - 7895.528: 0.8689% ( 18) 00:11:50.212 7895.528 - 7926.735: 1.0124% ( 18) 00:11:50.212 7926.735 - 7957.943: 1.2117% ( 25) 00:11:50.212 7957.943 - 7989.150: 1.3951% ( 23) 00:11:50.212 7989.150 - 8051.566: 1.7857% ( 49) 00:11:50.212 8051.566 - 8113.981: 2.2640% ( 60) 00:11:50.212 8113.981 - 8176.396: 3.1649% ( 113) 00:11:50.212 8176.396 - 8238.811: 3.9939% ( 104) 00:11:50.212 8238.811 - 8301.227: 4.6556% ( 83) 00:11:50.212 8301.227 - 8363.642: 5.7478% ( 137) 00:11:50.212 8363.642 - 8426.057: 6.5848% ( 105) 00:11:50.212 8426.057 - 8488.472: 7.3581% ( 97) 00:11:50.212 8488.472 - 8550.888: 8.2988% ( 118) 00:11:50.212 8550.888 - 8613.303: 9.4866% ( 149) 00:11:50.212 8613.303 - 8675.718: 11.0013% ( 190) 00:11:50.212 8675.718 - 8738.133: 12.5319% ( 192) 00:11:50.212 8738.133 - 8800.549: 14.7003% ( 272) 00:11:50.212 8800.549 - 8862.964: 16.7092% ( 252) 00:11:50.212 8862.964 - 8925.379: 18.5906% ( 236) 00:11:50.212 8925.379 - 8987.794: 20.3922% ( 226) 00:11:50.212 8987.794 - 9050.210: 22.0584% ( 209) 00:11:50.212 9050.210 - 9112.625: 23.5651% ( 189) 00:11:50.212 9112.625 - 9175.040: 25.6776% ( 265) 00:11:50.212 9175.040 - 9237.455: 27.7344% ( 258) 00:11:50.212 9237.455 - 9299.870: 30.1977% ( 309) 00:11:50.212 9299.870 - 9362.286: 31.9515% ( 220) 00:11:50.212 9362.286 - 9424.701: 34.1438% ( 275) 00:11:50.212 9424.701 - 9487.116: 36.3760% ( 280) 00:11:50.212 9487.116 - 9549.531: 38.4487% ( 260) 00:11:50.212 9549.531 - 9611.947: 40.4416% ( 250) 00:11:50.212 9611.947 - 9674.362: 42.6180% ( 273) 00:11:50.212 9674.362 - 9736.777: 44.6747% ( 258) 00:11:50.212 9736.777 - 9799.192: 47.0584% ( 299) 00:11:50.212 9799.192 - 9861.608: 49.8485% ( 350) 00:11:50.212 9861.608 - 9924.023: 52.2800% ( 305) 00:11:50.212 9924.023 - 9986.438: 55.0303% ( 345) 00:11:50.212 9986.438 - 10048.853: 57.1747% ( 269) 00:11:50.212 10048.853 - 10111.269: 59.5663% ( 300) 00:11:50.212 10111.269 - 10173.684: 61.5593% ( 250) 00:11:50.212 10173.684 - 10236.099: 63.2175% ( 208) 00:11:50.212 10236.099 - 10298.514: 64.9155% ( 213) 00:11:50.212 10298.514 - 10360.930: 66.4222% ( 189) 00:11:50.212 10360.930 - 10423.345: 67.9767% ( 195) 00:11:50.212 10423.345 - 10485.760: 69.4196% ( 181) 00:11:50.212 10485.760 - 10548.175: 70.7350% ( 165) 00:11:50.212 10548.175 - 10610.590: 72.1301% ( 175) 00:11:50.212 10610.590 - 10673.006: 73.1744% ( 131) 00:11:50.212 10673.006 - 10735.421: 74.4101% ( 155) 00:11:50.212 10735.421 - 10797.836: 75.5501% ( 143) 00:11:50.212 10797.836 - 10860.251: 76.8096% ( 158) 00:11:50.212 10860.251 - 10922.667: 77.7822% ( 122) 00:11:50.212 10922.667 - 10985.082: 78.7309% ( 119) 00:11:50.212 10985.082 - 11047.497: 79.6556% ( 116) 00:11:50.212 11047.497 - 11109.912: 80.4448% ( 99) 00:11:50.212 11109.912 - 11172.328: 81.4254% ( 123) 00:11:50.212 11172.328 - 11234.743: 82.1668% ( 93) 00:11:50.212 11234.743 - 11297.158: 82.8444% ( 85) 00:11:50.212 11297.158 - 11359.573: 83.6336% ( 99) 00:11:50.212 11359.573 - 11421.989: 84.3750% ( 93) 00:11:50.212 
11421.989 - 11484.404: 85.0048% ( 79) 00:11:50.212 11484.404 - 11546.819: 85.6107% ( 76) 00:11:50.212 11546.819 - 11609.234: 86.2643% ( 82) 00:11:50.212 11609.234 - 11671.650: 86.8383% ( 72) 00:11:50.212 11671.650 - 11734.065: 87.4841% ( 81) 00:11:50.212 11734.065 - 11796.480: 88.1059% ( 78) 00:11:50.212 11796.480 - 11858.895: 88.6480% ( 68) 00:11:50.212 11858.895 - 11921.310: 89.2060% ( 70) 00:11:50.212 11921.310 - 11983.726: 89.7242% ( 65) 00:11:50.212 11983.726 - 12046.141: 90.2423% ( 65) 00:11:50.212 12046.141 - 12108.556: 90.8243% ( 73) 00:11:50.212 12108.556 - 12170.971: 91.3744% ( 69) 00:11:50.212 12170.971 - 12233.387: 91.8766% ( 63) 00:11:50.212 12233.387 - 12295.802: 92.2911% ( 52) 00:11:50.212 12295.802 - 12358.217: 92.8571% ( 71) 00:11:50.212 12358.217 - 12420.632: 93.2797% ( 53) 00:11:50.212 12420.632 - 12483.048: 93.6703% ( 49) 00:11:50.212 12483.048 - 12545.463: 94.0609% ( 49) 00:11:50.212 12545.463 - 12607.878: 94.4037% ( 43) 00:11:50.212 12607.878 - 12670.293: 94.7704% ( 46) 00:11:50.212 12670.293 - 12732.709: 95.0574% ( 36) 00:11:50.212 12732.709 - 12795.124: 95.3284% ( 34) 00:11:50.212 12795.124 - 12857.539: 95.5517% ( 28) 00:11:50.212 12857.539 - 12919.954: 95.7908% ( 30) 00:11:50.212 12919.954 - 12982.370: 96.0539% ( 33) 00:11:50.212 12982.370 - 13044.785: 96.3090% ( 32) 00:11:50.212 13044.785 - 13107.200: 96.5641% ( 32) 00:11:50.212 13107.200 - 13169.615: 96.8033% ( 30) 00:11:50.212 13169.615 - 13232.030: 97.0105% ( 26) 00:11:50.212 13232.030 - 13294.446: 97.1779% ( 21) 00:11:50.212 13294.446 - 13356.861: 97.3533% ( 22) 00:11:50.212 13356.861 - 13419.276: 97.5048% ( 19) 00:11:50.212 13419.276 - 13481.691: 97.6722% ( 21) 00:11:50.212 13481.691 - 13544.107: 97.7918% ( 15) 00:11:50.212 13544.107 - 13606.522: 97.9273% ( 17) 00:11:50.212 13606.522 - 13668.937: 98.0708% ( 18) 00:11:50.212 13668.937 - 13731.352: 98.1824% ( 14) 00:11:50.212 13731.352 - 13793.768: 98.2860% ( 13) 00:11:50.212 13793.768 - 13856.183: 98.4216% ( 17) 00:11:50.212 13856.183 - 13918.598: 98.5172% ( 12) 00:11:50.212 13918.598 - 13981.013: 98.5969% ( 10) 00:11:50.212 13981.013 - 14043.429: 98.6527% ( 7) 00:11:50.212 14043.429 - 14105.844: 98.7006% ( 6) 00:11:50.212 14105.844 - 14168.259: 98.7484% ( 6) 00:11:50.212 14168.259 - 14230.674: 98.7803% ( 4) 00:11:50.212 14230.674 - 14293.090: 98.8361% ( 7) 00:11:50.212 14293.090 - 14355.505: 98.8600% ( 3) 00:11:50.212 14355.505 - 14417.920: 98.8839% ( 3) 00:11:50.212 14417.920 - 14480.335: 98.9078% ( 3) 00:11:50.212 14480.335 - 14542.750: 98.9318% ( 3) 00:11:50.212 14542.750 - 14605.166: 98.9557% ( 3) 00:11:50.212 14605.166 - 14667.581: 98.9796% ( 3) 00:11:50.212 26089.570 - 26214.400: 98.9876% ( 1) 00:11:50.212 26214.400 - 26339.230: 99.0115% ( 3) 00:11:50.212 26339.230 - 26464.061: 99.0673% ( 7) 00:11:50.212 26464.061 - 26588.891: 99.2506% ( 23) 00:11:50.212 26588.891 - 26713.722: 99.3144% ( 8) 00:11:50.212 26713.722 - 26838.552: 99.3383% ( 3) 00:11:50.212 26838.552 - 26963.383: 99.3622% ( 3) 00:11:50.212 26963.383 - 27088.213: 99.3782% ( 2) 00:11:50.212 27088.213 - 27213.044: 99.4101% ( 4) 00:11:50.212 27213.044 - 27337.874: 99.4260% ( 2) 00:11:50.212 27337.874 - 27462.705: 99.4818% ( 7) 00:11:50.212 27462.705 - 27587.535: 99.4978% ( 2) 00:11:50.212 27587.535 - 27712.366: 99.5376% ( 5) 00:11:50.212 27712.366 - 27837.196: 99.5855% ( 6) 00:11:50.212 27837.196 - 27962.027: 99.6173% ( 4) 00:11:50.212 27962.027 - 28086.857: 99.6572% ( 5) 00:11:50.212 28211.688 - 28336.518: 99.6732% ( 2) 00:11:50.212 28336.518 - 28461.349: 99.7050% ( 4) 00:11:50.212 28461.349 - 
28586.179: 99.7130% ( 1) 00:11:50.212 28586.179 - 28711.010: 99.7290% ( 2) 00:11:50.212 28711.010 - 28835.840: 99.7927% ( 8) 00:11:50.212 28835.840 - 28960.670: 99.8166% ( 3) 00:11:50.212 28960.670 - 29085.501: 99.8406% ( 3) 00:11:50.212 29085.501 - 29210.331: 99.9362% ( 12) 00:11:50.212 29709.653 - 29834.484: 99.9522% ( 2) 00:11:50.212 29834.484 - 29959.314: 99.9841% ( 4) 00:11:50.212 29959.314 - 30084.145: 100.0000% ( 2) 00:11:50.212 00:11:50.212 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:11:50.212 ============================================================================== 00:11:50.212 Range in us Cumulative IO count 00:11:50.212 7458.621 - 7489.829: 0.0080% ( 1) 00:11:50.212 7489.829 - 7521.036: 0.0159% ( 1) 00:11:50.212 7552.244 - 7583.451: 0.0239% ( 1) 00:11:50.213 7614.659 - 7645.867: 0.0319% ( 1) 00:11:50.213 7645.867 - 7677.074: 0.0478% ( 2) 00:11:50.213 7677.074 - 7708.282: 0.0638% ( 2) 00:11:50.213 7708.282 - 7739.490: 0.0717% ( 1) 00:11:50.213 7739.490 - 7770.697: 0.0877% ( 2) 00:11:50.213 7770.697 - 7801.905: 0.0957% ( 1) 00:11:50.213 7801.905 - 7833.112: 0.1116% ( 2) 00:11:50.213 7833.112 - 7864.320: 0.1435% ( 4) 00:11:50.213 7864.320 - 7895.528: 0.1674% ( 3) 00:11:50.213 7895.528 - 7926.735: 0.1913% ( 3) 00:11:50.213 7926.735 - 7957.943: 0.2232% ( 4) 00:11:50.213 7957.943 - 7989.150: 0.2551% ( 4) 00:11:50.213 7989.150 - 8051.566: 0.4703% ( 27) 00:11:50.213 8051.566 - 8113.981: 1.0523% ( 73) 00:11:50.213 8113.981 - 8176.396: 1.2596% ( 26) 00:11:50.213 8176.396 - 8238.811: 1.9930% ( 92) 00:11:50.213 8238.811 - 8301.227: 2.2242% ( 29) 00:11:50.213 8301.227 - 8363.642: 2.4713% ( 31) 00:11:50.213 8363.642 - 8426.057: 2.7822% ( 39) 00:11:50.213 8426.057 - 8488.472: 3.6432% ( 108) 00:11:50.213 8488.472 - 8550.888: 4.5520% ( 114) 00:11:50.213 8550.888 - 8613.303: 5.1260% ( 72) 00:11:50.213 8613.303 - 8675.718: 5.8434% ( 90) 00:11:50.213 8675.718 - 8738.133: 6.8080% ( 121) 00:11:50.213 8738.133 - 8800.549: 8.0835% ( 160) 00:11:50.213 8800.549 - 8862.964: 9.6062% ( 191) 00:11:50.213 8862.964 - 8925.379: 11.1288% ( 191) 00:11:50.213 8925.379 - 8987.794: 13.5682% ( 306) 00:11:50.213 8987.794 - 9050.210: 15.4735% ( 239) 00:11:50.213 9050.210 - 9112.625: 18.1840% ( 340) 00:11:50.213 9112.625 - 9175.040: 22.1939% ( 503) 00:11:50.213 9175.040 - 9237.455: 25.4624% ( 410) 00:11:50.213 9237.455 - 9299.870: 28.3402% ( 361) 00:11:50.213 9299.870 - 9362.286: 30.6601% ( 291) 00:11:50.213 9362.286 - 9424.701: 33.9525% ( 413) 00:11:50.213 9424.701 - 9487.116: 37.4841% ( 443) 00:11:50.213 9487.116 - 9549.531: 40.1786% ( 338) 00:11:50.213 9549.531 - 9611.947: 42.7774% ( 326) 00:11:50.213 9611.947 - 9674.362: 45.2248% ( 307) 00:11:50.213 9674.362 - 9736.777: 49.6333% ( 553) 00:11:50.213 9736.777 - 9799.192: 52.0807% ( 307) 00:11:50.213 9799.192 - 9861.608: 55.8594% ( 474) 00:11:50.213 9861.608 - 9924.023: 57.9082% ( 257) 00:11:50.213 9924.023 - 9986.438: 59.3909% ( 186) 00:11:50.213 9986.438 - 10048.853: 61.3600% ( 247) 00:11:50.213 10048.853 - 10111.269: 62.6913% ( 167) 00:11:50.213 10111.269 - 10173.684: 64.0306% ( 168) 00:11:50.213 10173.684 - 10236.099: 65.2982% ( 159) 00:11:50.213 10236.099 - 10298.514: 66.9723% ( 210) 00:11:50.213 10298.514 - 10360.930: 68.0564% ( 136) 00:11:50.213 10360.930 - 10423.345: 69.0848% ( 129) 00:11:50.213 10423.345 - 10485.760: 70.2408% ( 145) 00:11:50.213 10485.760 - 10548.175: 71.3728% ( 142) 00:11:50.213 10548.175 - 10610.590: 73.4614% ( 262) 00:11:50.213 10610.590 - 10673.006: 74.7369% ( 160) 00:11:50.213 10673.006 - 10735.421: 75.6776% ( 
118) 00:11:50.213 10735.421 - 10797.836: 76.5784% ( 113) 00:11:50.213 10797.836 - 10860.251: 77.2959% ( 90) 00:11:50.213 10860.251 - 10922.667: 77.9974% ( 88) 00:11:50.213 10922.667 - 10985.082: 78.7388% ( 93) 00:11:50.213 10985.082 - 11047.497: 79.4882% ( 94) 00:11:50.213 11047.497 - 11109.912: 80.2535% ( 96) 00:11:50.213 11109.912 - 11172.328: 81.1224% ( 109) 00:11:50.213 11172.328 - 11234.743: 81.9994% ( 110) 00:11:50.213 11234.743 - 11297.158: 82.8763% ( 110) 00:11:50.213 11297.158 - 11359.573: 83.6575% ( 98) 00:11:50.213 11359.573 - 11421.989: 84.4388% ( 98) 00:11:50.213 11421.989 - 11484.404: 85.2041% ( 96) 00:11:50.213 11484.404 - 11546.819: 86.0571% ( 107) 00:11:50.213 11546.819 - 11609.234: 86.6869% ( 79) 00:11:50.213 11609.234 - 11671.650: 87.2608% ( 72) 00:11:50.213 11671.650 - 11734.065: 87.8348% ( 72) 00:11:50.213 11734.065 - 11796.480: 88.4965% ( 83) 00:11:50.213 11796.480 - 11858.895: 89.1661% ( 84) 00:11:50.213 11858.895 - 11921.310: 89.8517% ( 86) 00:11:50.213 11921.310 - 11983.726: 90.4337% ( 73) 00:11:50.213 11983.726 - 12046.141: 91.0236% ( 74) 00:11:50.213 12046.141 - 12108.556: 91.6693% ( 81) 00:11:50.213 12108.556 - 12170.971: 92.2592% ( 74) 00:11:50.213 12170.971 - 12233.387: 92.7455% ( 61) 00:11:50.213 12233.387 - 12295.802: 93.2239% ( 60) 00:11:50.213 12295.802 - 12358.217: 93.7101% ( 61) 00:11:50.213 12358.217 - 12420.632: 94.1725% ( 58) 00:11:50.213 12420.632 - 12483.048: 94.5711% ( 50) 00:11:50.213 12483.048 - 12545.463: 94.9857% ( 52) 00:11:50.213 12545.463 - 12607.878: 95.3364% ( 44) 00:11:50.213 12607.878 - 12670.293: 95.6792% ( 43) 00:11:50.213 12670.293 - 12732.709: 95.9901% ( 39) 00:11:50.213 12732.709 - 12795.124: 96.2213% ( 29) 00:11:50.213 12795.124 - 12857.539: 96.4365% ( 27) 00:11:50.213 12857.539 - 12919.954: 96.6518% ( 27) 00:11:50.213 12919.954 - 12982.370: 96.8192% ( 21) 00:11:50.213 12982.370 - 13044.785: 97.0026% ( 23) 00:11:50.213 13044.785 - 13107.200: 97.1700% ( 21) 00:11:50.213 13107.200 - 13169.615: 97.3533% ( 23) 00:11:50.213 13169.615 - 13232.030: 97.5367% ( 23) 00:11:50.213 13232.030 - 13294.446: 97.6961% ( 20) 00:11:50.213 13294.446 - 13356.861: 97.8396% ( 18) 00:11:50.213 13356.861 - 13419.276: 97.9672% ( 16) 00:11:50.213 13419.276 - 13481.691: 98.1027% ( 17) 00:11:50.213 13481.691 - 13544.107: 98.2143% ( 14) 00:11:50.213 13544.107 - 13606.522: 98.3339% ( 15) 00:11:50.213 13606.522 - 13668.937: 98.4375% ( 13) 00:11:50.213 13668.937 - 13731.352: 98.5491% ( 14) 00:11:50.213 13731.352 - 13793.768: 98.6607% ( 14) 00:11:50.213 13793.768 - 13856.183: 98.7803% ( 15) 00:11:50.213 13856.183 - 13918.598: 98.8600% ( 10) 00:11:50.213 13918.598 - 13981.013: 98.9238% ( 8) 00:11:50.213 13981.013 - 14043.429: 98.9716% ( 6) 00:11:50.213 14043.429 - 14105.844: 98.9796% ( 1) 00:11:50.213 25590.248 - 25715.078: 98.9955% ( 2) 00:11:50.213 25715.078 - 25839.909: 99.1071% ( 14) 00:11:50.213 25839.909 - 25964.739: 99.2108% ( 13) 00:11:50.213 25964.739 - 26089.570: 99.2347% ( 3) 00:11:50.213 26089.570 - 26214.400: 99.2586% ( 3) 00:11:50.213 26214.400 - 26339.230: 99.2825% ( 3) 00:11:50.213 26339.230 - 26464.061: 99.3304% ( 6) 00:11:50.213 26464.061 - 26588.891: 99.4420% ( 14) 00:11:50.213 26588.891 - 26713.722: 99.5456% ( 13) 00:11:50.213 26713.722 - 26838.552: 99.6413% ( 12) 00:11:50.213 26838.552 - 26963.383: 99.7608% ( 15) 00:11:50.213 26963.383 - 27088.213: 99.8724% ( 14) 00:11:50.213 27088.213 - 27213.044: 99.9601% ( 11) 00:11:50.213 27213.044 - 27337.874: 99.9920% ( 4) 00:11:50.213 27712.366 - 27837.196: 100.0000% ( 1) 00:11:50.213 00:11:50.213 Latency 
histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:11:50.213 ============================================================================== 00:11:50.213 Range in us Cumulative IO count 00:11:50.213 7115.337 - 7146.545: 0.0080% ( 1) 00:11:50.213 7271.375 - 7302.583: 0.0159% ( 1) 00:11:50.213 7427.413 - 7458.621: 0.0478% ( 4) 00:11:50.213 7458.621 - 7489.829: 0.0877% ( 5) 00:11:50.213 7489.829 - 7521.036: 0.1196% ( 4) 00:11:50.213 7521.036 - 7552.244: 0.1754% ( 7) 00:11:50.213 7552.244 - 7583.451: 0.2232% ( 6) 00:11:50.213 7583.451 - 7614.659: 0.2790% ( 7) 00:11:50.213 7614.659 - 7645.867: 0.3667% ( 11) 00:11:50.213 7645.867 - 7677.074: 0.4624% ( 12) 00:11:50.213 7677.074 - 7708.282: 0.5501% ( 11) 00:11:50.213 7708.282 - 7739.490: 0.6457% ( 12) 00:11:50.213 7739.490 - 7770.697: 0.7494% ( 13) 00:11:50.213 7770.697 - 7801.905: 0.8530% ( 13) 00:11:50.213 7801.905 - 7833.112: 1.2197% ( 46) 00:11:50.213 7833.112 - 7864.320: 1.3473% ( 16) 00:11:50.213 7864.320 - 7895.528: 1.4748% ( 16) 00:11:50.213 7895.528 - 7926.735: 1.5625% ( 11) 00:11:50.213 7926.735 - 7957.943: 1.7379% ( 22) 00:11:50.213 7957.943 - 7989.150: 1.8734% ( 17) 00:11:50.213 7989.150 - 8051.566: 2.5430% ( 84) 00:11:50.213 8051.566 - 8113.981: 2.9895% ( 56) 00:11:50.213 8113.981 - 8176.396: 3.7548% ( 96) 00:11:50.213 8176.396 - 8238.811: 4.2012% ( 56) 00:11:50.213 8238.811 - 8301.227: 4.7672% ( 71) 00:11:50.213 8301.227 - 8363.642: 5.3811% ( 77) 00:11:50.213 8363.642 - 8426.057: 6.1065% ( 91) 00:11:50.214 8426.057 - 8488.472: 6.9196% ( 102) 00:11:50.214 8488.472 - 8550.888: 7.8603% ( 118) 00:11:50.214 8550.888 - 8613.303: 8.6496% ( 99) 00:11:50.214 8613.303 - 8675.718: 9.5344% ( 111) 00:11:50.214 8675.718 - 8738.133: 10.5150% ( 123) 00:11:50.214 8738.133 - 8800.549: 11.4796% ( 121) 00:11:50.214 8800.549 - 8862.964: 12.8747% ( 175) 00:11:50.214 8862.964 - 8925.379: 14.6524% ( 223) 00:11:50.214 8925.379 - 8987.794: 16.2070% ( 195) 00:11:50.214 8987.794 - 9050.210: 18.0006% ( 225) 00:11:50.214 9050.210 - 9112.625: 20.5038% ( 314) 00:11:50.214 9112.625 - 9175.040: 22.7599% ( 283) 00:11:50.214 9175.040 - 9237.455: 24.8326% ( 260) 00:11:50.214 9237.455 - 9299.870: 27.2003% ( 297) 00:11:50.214 9299.870 - 9362.286: 30.1578% ( 371) 00:11:50.214 9362.286 - 9424.701: 32.4777% ( 291) 00:11:50.214 9424.701 - 9487.116: 34.8772% ( 301) 00:11:50.214 9487.116 - 9549.531: 37.3087% ( 305) 00:11:50.214 9549.531 - 9611.947: 40.0510% ( 344) 00:11:50.214 9611.947 - 9674.362: 42.6260% ( 323) 00:11:50.214 9674.362 - 9736.777: 45.2168% ( 325) 00:11:50.214 9736.777 - 9799.192: 47.7280% ( 315) 00:11:50.214 9799.192 - 9861.608: 50.0478% ( 291) 00:11:50.214 9861.608 - 9924.023: 53.0373% ( 375) 00:11:50.214 9924.023 - 9986.438: 55.2615% ( 279) 00:11:50.214 9986.438 - 10048.853: 57.5494% ( 287) 00:11:50.214 10048.853 - 10111.269: 59.2315% ( 211) 00:11:50.214 10111.269 - 10173.684: 61.0730% ( 231) 00:11:50.214 10173.684 - 10236.099: 62.9783% ( 239) 00:11:50.214 10236.099 - 10298.514: 64.3017% ( 166) 00:11:50.214 10298.514 - 10360.930: 65.5851% ( 161) 00:11:50.214 10360.930 - 10423.345: 66.9643% ( 173) 00:11:50.214 10423.345 - 10485.760: 68.4790% ( 190) 00:11:50.214 10485.760 - 10548.175: 69.8980% ( 178) 00:11:50.214 10548.175 - 10610.590: 71.1256% ( 154) 00:11:50.214 10610.590 - 10673.006: 72.1460% ( 128) 00:11:50.214 10673.006 - 10735.421: 73.3578% ( 152) 00:11:50.214 10735.421 - 10797.836: 74.2586% ( 113) 00:11:50.214 10797.836 - 10860.251: 75.3189% ( 133) 00:11:50.214 10860.251 - 10922.667: 76.4031% ( 136) 00:11:50.214 10922.667 - 10985.082: 77.6228% ( 153) 
00:11:50.214 10985.082 - 11047.497: 78.6511% ( 129) 00:11:50.214 11047.497 - 11109.912: 79.7114% ( 133) 00:11:50.214 11109.912 - 11172.328: 80.8195% ( 139) 00:11:50.214 11172.328 - 11234.743: 81.8878% ( 134) 00:11:50.214 11234.743 - 11297.158: 82.9002% ( 127) 00:11:50.214 11297.158 - 11359.573: 84.0322% ( 142) 00:11:50.214 11359.573 - 11421.989: 84.8772% ( 106) 00:11:50.214 11421.989 - 11484.404: 85.6266% ( 94) 00:11:50.214 11484.404 - 11546.819: 86.4078% ( 98) 00:11:50.214 11546.819 - 11609.234: 87.2210% ( 102) 00:11:50.214 11609.234 - 11671.650: 87.8109% ( 74) 00:11:50.214 11671.650 - 11734.065: 88.4327% ( 78) 00:11:50.214 11734.065 - 11796.480: 89.0067% ( 72) 00:11:50.214 11796.480 - 11858.895: 89.6684% ( 83) 00:11:50.214 11858.895 - 11921.310: 90.2264% ( 70) 00:11:50.214 11921.310 - 11983.726: 90.7366% ( 64) 00:11:50.214 11983.726 - 12046.141: 91.2229% ( 61) 00:11:50.214 12046.141 - 12108.556: 91.7012% ( 60) 00:11:50.214 12108.556 - 12170.971: 92.1795% ( 60) 00:11:50.214 12170.971 - 12233.387: 92.6260% ( 56) 00:11:50.214 12233.387 - 12295.802: 93.1202% ( 62) 00:11:50.214 12295.802 - 12358.217: 93.5587% ( 55) 00:11:50.214 12358.217 - 12420.632: 94.0131% ( 57) 00:11:50.214 12420.632 - 12483.048: 94.4276% ( 52) 00:11:50.214 12483.048 - 12545.463: 94.8182% ( 49) 00:11:50.214 12545.463 - 12607.878: 95.1610% ( 43) 00:11:50.214 12607.878 - 12670.293: 95.5118% ( 44) 00:11:50.214 12670.293 - 12732.709: 95.8068% ( 37) 00:11:50.214 12732.709 - 12795.124: 96.1017% ( 37) 00:11:50.214 12795.124 - 12857.539: 96.4047% ( 38) 00:11:50.214 12857.539 - 12919.954: 96.6279% ( 28) 00:11:50.214 12919.954 - 12982.370: 96.8511% ( 28) 00:11:50.214 12982.370 - 13044.785: 97.0743% ( 28) 00:11:50.214 13044.785 - 13107.200: 97.2577% ( 23) 00:11:50.214 13107.200 - 13169.615: 97.4251% ( 21) 00:11:50.214 13169.615 - 13232.030: 97.5925% ( 21) 00:11:50.214 13232.030 - 13294.446: 97.7599% ( 21) 00:11:50.214 13294.446 - 13356.861: 97.9353% ( 22) 00:11:50.214 13356.861 - 13419.276: 98.0947% ( 20) 00:11:50.214 13419.276 - 13481.691: 98.2382% ( 18) 00:11:50.214 13481.691 - 13544.107: 98.3658% ( 16) 00:11:50.214 13544.107 - 13606.522: 98.4853% ( 15) 00:11:50.214 13606.522 - 13668.937: 98.5810% ( 12) 00:11:50.214 13668.937 - 13731.352: 98.6767% ( 12) 00:11:50.214 13731.352 - 13793.768: 98.7723% ( 12) 00:11:50.214 13793.768 - 13856.183: 98.8520% ( 10) 00:11:50.214 13856.183 - 13918.598: 98.8919% ( 5) 00:11:50.214 13918.598 - 13981.013: 98.9477% ( 7) 00:11:50.214 13981.013 - 14043.429: 98.9796% ( 4) 00:11:50.214 23842.621 - 23967.451: 98.9876% ( 1) 00:11:50.214 23967.451 - 24092.282: 99.0115% ( 3) 00:11:50.214 24092.282 - 24217.112: 99.0434% ( 4) 00:11:50.214 24217.112 - 24341.943: 99.0753% ( 4) 00:11:50.214 24341.943 - 24466.773: 99.1071% ( 4) 00:11:50.214 24466.773 - 24591.604: 99.1390% ( 4) 00:11:50.214 24591.604 - 24716.434: 99.2188% ( 10) 00:11:50.214 24716.434 - 24841.265: 99.3383% ( 15) 00:11:50.214 24841.265 - 24966.095: 99.5217% ( 23) 00:11:50.214 24966.095 - 25090.926: 99.5934% ( 9) 00:11:50.214 25090.926 - 25215.756: 99.6492% ( 7) 00:11:50.214 25215.756 - 25340.587: 99.6891% ( 5) 00:11:50.214 25340.587 - 25465.417: 99.7130% ( 3) 00:11:50.214 25465.417 - 25590.248: 99.7369% ( 3) 00:11:50.214 25590.248 - 25715.078: 99.7688% ( 4) 00:11:50.214 25715.078 - 25839.909: 99.7927% ( 3) 00:11:50.214 25839.909 - 25964.739: 99.8166% ( 3) 00:11:50.214 25964.739 - 26089.570: 99.8485% ( 4) 00:11:50.214 26089.570 - 26214.400: 99.8724% ( 3) 00:11:50.214 26214.400 - 26339.230: 99.9043% ( 4) 00:11:50.214 26339.230 - 26464.061: 99.9283% ( 
3) 00:11:50.214 26464.061 - 26588.891: 99.9601% ( 4) 00:11:50.214 26588.891 - 26713.722: 99.9841% ( 3) 00:11:50.214 26713.722 - 26838.552: 100.0000% ( 2) 00:11:50.214 00:11:50.214 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:11:50.214 ============================================================================== 00:11:50.214 Range in us Cumulative IO count 00:11:50.214 7333.790 - 7364.998: 0.0080% ( 1) 00:11:50.214 7521.036 - 7552.244: 0.0159% ( 1) 00:11:50.214 7583.451 - 7614.659: 0.0239% ( 1) 00:11:50.214 7614.659 - 7645.867: 0.0558% ( 4) 00:11:50.214 7645.867 - 7677.074: 0.0877% ( 4) 00:11:50.214 7677.074 - 7708.282: 0.1276% ( 5) 00:11:50.214 7708.282 - 7739.490: 0.1594% ( 4) 00:11:50.214 7739.490 - 7770.697: 0.1913% ( 4) 00:11:50.214 7770.697 - 7801.905: 0.2631% ( 9) 00:11:50.214 7801.905 - 7833.112: 0.3348% ( 9) 00:11:50.214 7833.112 - 7864.320: 0.4305% ( 12) 00:11:50.214 7864.320 - 7895.528: 0.5182% ( 11) 00:11:50.214 7895.528 - 7926.735: 0.6138% ( 12) 00:11:50.214 7926.735 - 7957.943: 0.7573% ( 18) 00:11:50.214 7957.943 - 7989.150: 0.9646% ( 26) 00:11:50.214 7989.150 - 8051.566: 1.3871% ( 53) 00:11:50.214 8051.566 - 8113.981: 1.8654% ( 60) 00:11:50.214 8113.981 - 8176.396: 2.6228% ( 95) 00:11:50.214 8176.396 - 8238.811: 3.1330% ( 64) 00:11:50.214 8238.811 - 8301.227: 3.6511% ( 65) 00:11:50.214 8301.227 - 8363.642: 4.3447% ( 87) 00:11:50.214 8363.642 - 8426.057: 5.2136% ( 109) 00:11:50.214 8426.057 - 8488.472: 6.2500% ( 130) 00:11:50.214 8488.472 - 8550.888: 6.9436% ( 87) 00:11:50.214 8550.888 - 8613.303: 7.7966% ( 107) 00:11:50.214 8613.303 - 8675.718: 8.7293% ( 117) 00:11:50.214 8675.718 - 8738.133: 9.7497% ( 128) 00:11:50.214 8738.133 - 8800.549: 11.1767% ( 179) 00:11:50.214 8800.549 - 8862.964: 12.7232% ( 194) 00:11:50.214 8862.964 - 8925.379: 14.1502% ( 179) 00:11:50.214 8925.379 - 8987.794: 15.9279% ( 223) 00:11:50.214 8987.794 - 9050.210: 17.7296% ( 226) 00:11:50.214 9050.210 - 9112.625: 20.1052% ( 298) 00:11:50.214 9112.625 - 9175.040: 22.4968% ( 300) 00:11:50.214 9175.040 - 9237.455: 25.0239% ( 317) 00:11:50.214 9237.455 - 9299.870: 27.6945% ( 335) 00:11:50.214 9299.870 - 9362.286: 30.4050% ( 340) 00:11:50.214 9362.286 - 9424.701: 33.3546% ( 370) 00:11:50.214 9424.701 - 9487.116: 36.0969% ( 344) 00:11:50.214 9487.116 - 9549.531: 37.9624% ( 234) 00:11:50.214 9549.531 - 9611.947: 40.2503% ( 287) 00:11:50.214 9611.947 - 9674.362: 42.9927% ( 344) 00:11:50.214 9674.362 - 9736.777: 45.5835% ( 325) 00:11:50.214 9736.777 - 9799.192: 48.5730% ( 375) 00:11:50.214 9799.192 - 9861.608: 51.0842% ( 315) 00:11:50.214 9861.608 - 9924.023: 53.1489% ( 259) 00:11:50.214 9924.023 - 9986.438: 55.2376% ( 262) 00:11:50.214 9986.438 - 10048.853: 57.1030% ( 234) 00:11:50.214 10048.853 - 10111.269: 59.2156% ( 265) 00:11:50.214 10111.269 - 10173.684: 61.2006% ( 249) 00:11:50.214 10173.684 - 10236.099: 62.9145% ( 215) 00:11:50.214 10236.099 - 10298.514: 64.5169% ( 201) 00:11:50.214 10298.514 - 10360.930: 66.1113% ( 200) 00:11:50.214 10360.930 - 10423.345: 67.7057% ( 200) 00:11:50.214 10423.345 - 10485.760: 69.0689% ( 171) 00:11:50.214 10485.760 - 10548.175: 70.2726% ( 151) 00:11:50.214 10548.175 - 10610.590: 71.4923% ( 153) 00:11:50.214 10610.590 - 10673.006: 72.6642% ( 147) 00:11:50.214 10673.006 - 10735.421: 73.7962% ( 142) 00:11:50.215 10735.421 - 10797.836: 74.8087% ( 127) 00:11:50.215 10797.836 - 10860.251: 75.8211% ( 127) 00:11:50.215 10860.251 - 10922.667: 76.7937% ( 122) 00:11:50.215 10922.667 - 10985.082: 77.8061% ( 127) 00:11:50.215 10985.082 - 11047.497: 78.8903% ( 136) 
00:11:50.215 11047.497 - 11109.912: 80.0064% ( 140) 00:11:50.215 11109.912 - 11172.328: 80.9949% ( 124) 00:11:50.215 11172.328 - 11234.743: 82.0312% ( 130) 00:11:50.215 11234.743 - 11297.158: 83.1234% ( 137) 00:11:50.215 11297.158 - 11359.573: 84.1279% ( 126) 00:11:50.215 11359.573 - 11421.989: 84.9091% ( 98) 00:11:50.215 11421.989 - 11484.404: 85.6744% ( 96) 00:11:50.215 11484.404 - 11546.819: 86.4477% ( 97) 00:11:50.215 11546.819 - 11609.234: 87.1173% ( 84) 00:11:50.215 11609.234 - 11671.650: 87.7073% ( 74) 00:11:50.215 11671.650 - 11734.065: 88.1936% ( 61) 00:11:50.215 11734.065 - 11796.480: 88.8393% ( 81) 00:11:50.215 11796.480 - 11858.895: 89.4452% ( 76) 00:11:50.215 11858.895 - 11921.310: 89.9633% ( 65) 00:11:50.215 11921.310 - 11983.726: 90.5134% ( 69) 00:11:50.215 11983.726 - 12046.141: 91.0953% ( 73) 00:11:50.215 12046.141 - 12108.556: 91.7012% ( 76) 00:11:50.215 12108.556 - 12170.971: 92.3230% ( 78) 00:11:50.215 12170.971 - 12233.387: 92.8571% ( 67) 00:11:50.215 12233.387 - 12295.802: 93.4232% ( 71) 00:11:50.215 12295.802 - 12358.217: 93.9573% ( 67) 00:11:50.215 12358.217 - 12420.632: 94.5392% ( 73) 00:11:50.215 12420.632 - 12483.048: 95.0574% ( 65) 00:11:50.215 12483.048 - 12545.463: 95.3444% ( 36) 00:11:50.215 12545.463 - 12607.878: 95.6154% ( 34) 00:11:50.215 12607.878 - 12670.293: 95.8466% ( 29) 00:11:50.215 12670.293 - 12732.709: 96.0698% ( 28) 00:11:50.215 12732.709 - 12795.124: 96.2771% ( 26) 00:11:50.215 12795.124 - 12857.539: 96.4684% ( 24) 00:11:50.215 12857.539 - 12919.954: 96.6916% ( 28) 00:11:50.215 12919.954 - 12982.370: 96.8830% ( 24) 00:11:50.215 12982.370 - 13044.785: 97.0982% ( 27) 00:11:50.215 13044.785 - 13107.200: 97.3135% ( 27) 00:11:50.215 13107.200 - 13169.615: 97.4809% ( 21) 00:11:50.215 13169.615 - 13232.030: 97.6403% ( 20) 00:11:50.215 13232.030 - 13294.446: 97.7918% ( 19) 00:11:50.215 13294.446 - 13356.861: 97.9114% ( 15) 00:11:50.215 13356.861 - 13419.276: 98.0389% ( 16) 00:11:50.215 13419.276 - 13481.691: 98.1585% ( 15) 00:11:50.215 13481.691 - 13544.107: 98.2781% ( 15) 00:11:50.215 13544.107 - 13606.522: 98.3817% ( 13) 00:11:50.215 13606.522 - 13668.937: 98.4774% ( 12) 00:11:50.215 13668.937 - 13731.352: 98.5969% ( 15) 00:11:50.215 13731.352 - 13793.768: 98.6687% ( 9) 00:11:50.215 13793.768 - 13856.183: 98.7325% ( 8) 00:11:50.215 13856.183 - 13918.598: 98.7803% ( 6) 00:11:50.215 13918.598 - 13981.013: 98.8281% ( 6) 00:11:50.215 13981.013 - 14043.429: 98.8760% ( 6) 00:11:50.215 14043.429 - 14105.844: 98.9318% ( 7) 00:11:50.215 14105.844 - 14168.259: 98.9796% ( 6) 00:11:50.215 23093.638 - 23218.469: 99.0115% ( 4) 00:11:50.215 23218.469 - 23343.299: 99.0354% ( 3) 00:11:50.215 23343.299 - 23468.130: 99.0673% ( 4) 00:11:50.215 23468.130 - 23592.960: 99.0992% ( 4) 00:11:50.215 23592.960 - 23717.790: 99.1311% ( 4) 00:11:50.215 23717.790 - 23842.621: 99.1550% ( 3) 00:11:50.215 23842.621 - 23967.451: 99.1789% ( 3) 00:11:50.215 23967.451 - 24092.282: 99.3144% ( 17) 00:11:50.215 24092.282 - 24217.112: 99.5217% ( 26) 00:11:50.215 24217.112 - 24341.943: 99.6652% ( 18) 00:11:50.215 24341.943 - 24466.773: 99.7210% ( 7) 00:11:50.215 24466.773 - 24591.604: 99.7608% ( 5) 00:11:50.215 24591.604 - 24716.434: 99.7927% ( 4) 00:11:50.215 24716.434 - 24841.265: 99.8166% ( 3) 00:11:50.215 24841.265 - 24966.095: 99.8485% ( 4) 00:11:50.215 24966.095 - 25090.926: 99.8645% ( 2) 00:11:50.215 25090.926 - 25215.756: 99.8884% ( 3) 00:11:50.215 25215.756 - 25340.587: 99.9203% ( 4) 00:11:50.215 25340.587 - 25465.417: 99.9442% ( 3) 00:11:50.215 25465.417 - 25590.248: 99.9761% ( 4) 
00:11:50.215 25590.248 - 25715.078: 100.0000% ( 3) 00:11:50.215 00:11:50.215 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:11:50.215 ============================================================================== 00:11:50.215 Range in us Cumulative IO count 00:11:50.215 7614.659 - 7645.867: 0.0080% ( 1) 00:11:50.215 7677.074 - 7708.282: 0.0239% ( 2) 00:11:50.215 7708.282 - 7739.490: 0.0399% ( 2) 00:11:50.215 7739.490 - 7770.697: 0.0638% ( 3) 00:11:50.215 7770.697 - 7801.905: 0.0797% ( 2) 00:11:50.215 7801.905 - 7833.112: 0.1036% ( 3) 00:11:50.215 7833.112 - 7864.320: 0.1355% ( 4) 00:11:50.215 7864.320 - 7895.528: 0.1754% ( 5) 00:11:50.215 7895.528 - 7926.735: 0.2073% ( 4) 00:11:50.215 7926.735 - 7957.943: 0.2631% ( 7) 00:11:50.215 7957.943 - 7989.150: 0.3268% ( 8) 00:11:50.215 7989.150 - 8051.566: 0.4703% ( 18) 00:11:50.215 8051.566 - 8113.981: 0.6138% ( 18) 00:11:50.215 8113.981 - 8176.396: 0.8211% ( 26) 00:11:50.215 8176.396 - 8238.811: 1.1958% ( 47) 00:11:50.215 8238.811 - 8301.227: 1.8335% ( 80) 00:11:50.215 8301.227 - 8363.642: 2.4793% ( 81) 00:11:50.215 8363.642 - 8426.057: 3.4200% ( 118) 00:11:50.215 8426.057 - 8488.472: 4.1454% ( 91) 00:11:50.215 8488.472 - 8550.888: 5.1339% ( 124) 00:11:50.215 8550.888 - 8613.303: 6.2420% ( 139) 00:11:50.215 8613.303 - 8675.718: 7.7567% ( 190) 00:11:50.215 8675.718 - 8738.133: 9.5265% ( 222) 00:11:50.215 8738.133 - 8800.549: 10.8976% ( 172) 00:11:50.215 8800.549 - 8862.964: 12.5957% ( 213) 00:11:50.215 8862.964 - 8925.379: 14.7959% ( 276) 00:11:50.215 8925.379 - 8987.794: 17.1955% ( 301) 00:11:50.215 8987.794 - 9050.210: 19.1566% ( 246) 00:11:50.215 9050.210 - 9112.625: 21.2213% ( 259) 00:11:50.215 9112.625 - 9175.040: 23.4455% ( 279) 00:11:50.215 9175.040 - 9237.455: 25.5979% ( 270) 00:11:50.215 9237.455 - 9299.870: 27.9257% ( 292) 00:11:50.215 9299.870 - 9362.286: 30.6840% ( 346) 00:11:50.215 9362.286 - 9424.701: 33.6894% ( 377) 00:11:50.215 9424.701 - 9487.116: 36.2245% ( 318) 00:11:50.215 9487.116 - 9549.531: 38.6958% ( 310) 00:11:50.215 9549.531 - 9611.947: 41.1113% ( 303) 00:11:50.215 9611.947 - 9674.362: 43.6464% ( 318) 00:11:50.215 9674.362 - 9736.777: 45.8546% ( 277) 00:11:50.215 9736.777 - 9799.192: 48.7245% ( 360) 00:11:50.215 9799.192 - 9861.608: 51.2038% ( 311) 00:11:50.215 9861.608 - 9924.023: 53.5635% ( 296) 00:11:50.215 9924.023 - 9986.438: 55.8036% ( 281) 00:11:50.215 9986.438 - 10048.853: 57.7806% ( 248) 00:11:50.215 10048.853 - 10111.269: 59.5743% ( 225) 00:11:50.215 10111.269 - 10173.684: 61.4238% ( 232) 00:11:50.215 10173.684 - 10236.099: 62.9703% ( 194) 00:11:50.215 10236.099 - 10298.514: 64.6604% ( 212) 00:11:50.215 10298.514 - 10360.930: 66.2867% ( 204) 00:11:50.215 10360.930 - 10423.345: 68.3594% ( 260) 00:11:50.215 10423.345 - 10485.760: 69.6588% ( 163) 00:11:50.215 10485.760 - 10548.175: 70.8068% ( 144) 00:11:50.215 10548.175 - 10610.590: 71.9149% ( 139) 00:11:50.215 10610.590 - 10673.006: 72.9353% ( 128) 00:11:50.215 10673.006 - 10735.421: 73.9876% ( 132) 00:11:50.215 10735.421 - 10797.836: 75.0558% ( 134) 00:11:50.215 10797.836 - 10860.251: 75.9247% ( 109) 00:11:50.215 10860.251 - 10922.667: 76.8734% ( 119) 00:11:50.215 10922.667 - 10985.082: 77.9337% ( 133) 00:11:50.215 10985.082 - 11047.497: 79.1693% ( 155) 00:11:50.215 11047.497 - 11109.912: 80.4528% ( 161) 00:11:50.215 11109.912 - 11172.328: 81.6087% ( 145) 00:11:50.215 11172.328 - 11234.743: 82.5893% ( 123) 00:11:50.215 11234.743 - 11297.158: 83.4821% ( 112) 00:11:50.215 11297.158 - 11359.573: 84.4308% ( 119) 00:11:50.215 11359.573 - 11421.989: 
85.1881% ( 95) 00:11:50.215 11421.989 - 11484.404: 86.0969% ( 114) 00:11:50.215 11484.404 - 11546.819: 86.7825% ( 86) 00:11:50.215 11546.819 - 11609.234: 87.3485% ( 71) 00:11:50.215 11609.234 - 11671.650: 87.8986% ( 69) 00:11:50.215 11671.650 - 11734.065: 88.4885% ( 74) 00:11:50.215 11734.065 - 11796.480: 89.1980% ( 89) 00:11:50.215 11796.480 - 11858.895: 89.7879% ( 74) 00:11:50.215 11858.895 - 11921.310: 90.4257% ( 80) 00:11:50.215 11921.310 - 11983.726: 91.0953% ( 84) 00:11:50.215 11983.726 - 12046.141: 91.8208% ( 91) 00:11:50.216 12046.141 - 12108.556: 92.3948% ( 72) 00:11:50.216 12108.556 - 12170.971: 92.9289% ( 67) 00:11:50.216 12170.971 - 12233.387: 93.3913% ( 58) 00:11:50.216 12233.387 - 12295.802: 93.8776% ( 61) 00:11:50.216 12295.802 - 12358.217: 94.1885% ( 39) 00:11:50.216 12358.217 - 12420.632: 94.4914% ( 38) 00:11:50.216 12420.632 - 12483.048: 94.7864% ( 37) 00:11:50.216 12483.048 - 12545.463: 95.0893% ( 38) 00:11:50.216 12545.463 - 12607.878: 95.3364% ( 31) 00:11:50.216 12607.878 - 12670.293: 95.5756% ( 30) 00:11:50.216 12670.293 - 12732.709: 95.7988% ( 28) 00:11:50.216 12732.709 - 12795.124: 96.0140% ( 27) 00:11:50.216 12795.124 - 12857.539: 96.2612% ( 31) 00:11:50.216 12857.539 - 12919.954: 96.4605% ( 25) 00:11:50.216 12919.954 - 12982.370: 96.6757% ( 27) 00:11:50.216 12982.370 - 13044.785: 96.8830% ( 26) 00:11:50.216 13044.785 - 13107.200: 97.1142% ( 29) 00:11:50.216 13107.200 - 13169.615: 97.2895% ( 22) 00:11:50.216 13169.615 - 13232.030: 97.4649% ( 22) 00:11:50.216 13232.030 - 13294.446: 97.6164% ( 19) 00:11:50.216 13294.446 - 13356.861: 97.7679% ( 19) 00:11:50.216 13356.861 - 13419.276: 97.9512% ( 23) 00:11:50.216 13419.276 - 13481.691: 98.1027% ( 19) 00:11:50.216 13481.691 - 13544.107: 98.2462% ( 18) 00:11:50.216 13544.107 - 13606.522: 98.3418% ( 12) 00:11:50.216 13606.522 - 13668.937: 98.4136% ( 9) 00:11:50.216 13668.937 - 13731.352: 98.4774% ( 8) 00:11:50.216 13731.352 - 13793.768: 98.5252% ( 6) 00:11:50.216 13793.768 - 13856.183: 98.5810% ( 7) 00:11:50.216 13856.183 - 13918.598: 98.6368% ( 7) 00:11:50.216 13918.598 - 13981.013: 98.6926% ( 7) 00:11:50.216 13981.013 - 14043.429: 98.7404% ( 6) 00:11:50.216 14043.429 - 14105.844: 98.8042% ( 8) 00:11:50.216 14105.844 - 14168.259: 98.8520% ( 6) 00:11:50.216 14168.259 - 14230.674: 98.8839% ( 4) 00:11:50.216 14230.674 - 14293.090: 98.9078% ( 3) 00:11:50.216 14293.090 - 14355.505: 98.9318% ( 3) 00:11:50.216 14355.505 - 14417.920: 98.9636% ( 4) 00:11:50.216 14417.920 - 14480.335: 98.9796% ( 2) 00:11:50.216 21720.503 - 21845.333: 98.9876% ( 1) 00:11:50.216 21845.333 - 21970.164: 99.0195% ( 4) 00:11:50.216 21970.164 - 22094.994: 99.0513% ( 4) 00:11:50.216 22094.994 - 22219.825: 99.0753% ( 3) 00:11:50.216 22219.825 - 22344.655: 99.1071% ( 4) 00:11:50.216 22344.655 - 22469.486: 99.1390% ( 4) 00:11:50.216 22469.486 - 22594.316: 99.1709% ( 4) 00:11:50.216 22594.316 - 22719.147: 99.1948% ( 3) 00:11:50.216 22719.147 - 22843.977: 99.2506% ( 7) 00:11:50.216 22843.977 - 22968.808: 99.3383% ( 11) 00:11:50.216 22968.808 - 23093.638: 99.5057% ( 21) 00:11:50.216 23093.638 - 23218.469: 99.6333% ( 16) 00:11:50.216 23218.469 - 23343.299: 99.6971% ( 8) 00:11:50.216 23343.299 - 23468.130: 99.7449% ( 6) 00:11:50.216 23468.130 - 23592.960: 99.7768% ( 4) 00:11:50.216 23592.960 - 23717.790: 99.8007% ( 3) 00:11:50.216 23717.790 - 23842.621: 99.8246% ( 3) 00:11:50.216 23842.621 - 23967.451: 99.8565% ( 4) 00:11:50.216 23967.451 - 24092.282: 99.8804% ( 3) 00:11:50.216 24092.282 - 24217.112: 99.9123% ( 4) 00:11:50.216 24217.112 - 24341.943: 99.9362% ( 3) 
00:11:50.216 24341.943 - 24466.773: 99.9681% ( 4) 00:11:50.216 24466.773 - 24591.604: 100.0000% ( 4) 00:11:50.216 00:11:50.216 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:11:50.216 ============================================================================== 00:11:50.216 Range in us Cumulative IO count 00:11:50.216 7364.998 - 7396.206: 0.0080% ( 1) 00:11:50.216 7396.206 - 7427.413: 0.0159% ( 1) 00:11:50.216 7427.413 - 7458.621: 0.0399% ( 3) 00:11:50.216 7458.621 - 7489.829: 0.0717% ( 4) 00:11:50.216 7489.829 - 7521.036: 0.0877% ( 2) 00:11:50.216 7521.036 - 7552.244: 0.1116% ( 3) 00:11:50.216 7552.244 - 7583.451: 0.1355% ( 3) 00:11:50.216 7583.451 - 7614.659: 0.1754% ( 5) 00:11:50.216 7614.659 - 7645.867: 0.2152% ( 5) 00:11:50.216 7645.867 - 7677.074: 0.2312% ( 2) 00:11:50.216 7677.074 - 7708.282: 0.2631% ( 4) 00:11:50.216 7708.282 - 7739.490: 0.2950% ( 4) 00:11:50.216 7739.490 - 7770.697: 0.3348% ( 5) 00:11:50.216 7770.697 - 7801.905: 0.3747% ( 5) 00:11:50.216 7801.905 - 7833.112: 0.4225% ( 6) 00:11:50.216 7833.112 - 7864.320: 0.4464% ( 3) 00:11:50.216 7864.320 - 7895.528: 0.4863% ( 5) 00:11:50.216 7895.528 - 7926.735: 0.5820% ( 12) 00:11:50.216 7926.735 - 7957.943: 0.6138% ( 4) 00:11:50.216 7957.943 - 7989.150: 0.6537% ( 5) 00:11:50.216 7989.150 - 8051.566: 0.8211% ( 21) 00:11:50.216 8051.566 - 8113.981: 1.2994% ( 60) 00:11:50.216 8113.981 - 8176.396: 1.5944% ( 37) 00:11:50.216 8176.396 - 8238.811: 1.8256% ( 29) 00:11:50.216 8238.811 - 8301.227: 2.1205% ( 37) 00:11:50.216 8301.227 - 8363.642: 2.4235% ( 38) 00:11:50.216 8363.642 - 8426.057: 2.8300% ( 51) 00:11:50.216 8426.057 - 8488.472: 3.3881% ( 70) 00:11:50.216 8488.472 - 8550.888: 4.2809% ( 112) 00:11:50.216 8550.888 - 8613.303: 5.3332% ( 132) 00:11:50.216 8613.303 - 8675.718: 6.3377% ( 126) 00:11:50.216 8675.718 - 8738.133: 7.5335% ( 150) 00:11:50.216 8738.133 - 8800.549: 9.0880% ( 195) 00:11:50.216 8800.549 - 8862.964: 10.5469% ( 183) 00:11:50.216 8862.964 - 8925.379: 12.6754% ( 267) 00:11:50.216 8925.379 - 8987.794: 15.1865% ( 315) 00:11:50.216 8987.794 - 9050.210: 17.4904% ( 289) 00:11:50.216 9050.210 - 9112.625: 19.7624% ( 285) 00:11:50.216 9112.625 - 9175.040: 21.9547% ( 275) 00:11:50.216 9175.040 - 9237.455: 24.3383% ( 299) 00:11:50.216 9237.455 - 9299.870: 27.4633% ( 392) 00:11:50.216 9299.870 - 9362.286: 30.1260% ( 334) 00:11:50.216 9362.286 - 9424.701: 33.0277% ( 364) 00:11:50.216 9424.701 - 9487.116: 36.4477% ( 429) 00:11:50.216 9487.116 - 9549.531: 39.2937% ( 357) 00:11:50.216 9549.531 - 9611.947: 42.4107% ( 391) 00:11:50.216 9611.947 - 9674.362: 44.8182% ( 302) 00:11:50.216 9674.362 - 9736.777: 47.3613% ( 319) 00:11:50.216 9736.777 - 9799.192: 49.7608% ( 301) 00:11:50.216 9799.192 - 9861.608: 52.4235% ( 334) 00:11:50.216 9861.608 - 9924.023: 54.8390% ( 303) 00:11:50.216 9924.023 - 9986.438: 56.9037% ( 259) 00:11:50.216 9986.438 - 10048.853: 58.6974% ( 225) 00:11:50.216 10048.853 - 10111.269: 60.6346% ( 243) 00:11:50.216 10111.269 - 10173.684: 62.6276% ( 250) 00:11:50.216 10173.684 - 10236.099: 64.2379% ( 202) 00:11:50.216 10236.099 - 10298.514: 65.7446% ( 189) 00:11:50.216 10298.514 - 10360.930: 67.0599% ( 165) 00:11:50.216 10360.930 - 10423.345: 68.4869% ( 179) 00:11:50.216 10423.345 - 10485.760: 70.0415% ( 195) 00:11:50.216 10485.760 - 10548.175: 71.2054% ( 146) 00:11:50.216 10548.175 - 10610.590: 72.2975% ( 137) 00:11:50.216 10610.590 - 10673.006: 73.3578% ( 133) 00:11:50.216 10673.006 - 10735.421: 74.3702% ( 127) 00:11:50.216 10735.421 - 10797.836: 75.3986% ( 129) 00:11:50.216 10797.836 - 
10860.251: 76.5466% ( 144) 00:11:50.216 10860.251 - 10922.667: 77.6945% ( 144) 00:11:50.216 10922.667 - 10985.082: 79.2650% ( 197) 00:11:50.216 10985.082 - 11047.497: 80.3651% ( 138) 00:11:50.216 11047.497 - 11109.912: 81.2978% ( 117) 00:11:50.216 11109.912 - 11172.328: 82.3103% ( 127) 00:11:50.216 11172.328 - 11234.743: 83.0756% ( 96) 00:11:50.216 11234.743 - 11297.158: 83.9684% ( 112) 00:11:50.216 11297.158 - 11359.573: 84.5823% ( 77) 00:11:50.216 11359.573 - 11421.989: 85.2041% ( 78) 00:11:50.216 11421.989 - 11484.404: 85.8259% ( 78) 00:11:50.216 11484.404 - 11546.819: 86.5115% ( 86) 00:11:50.216 11546.819 - 11609.234: 87.1333% ( 78) 00:11:50.216 11609.234 - 11671.650: 87.7312% ( 75) 00:11:50.216 11671.650 - 11734.065: 88.3052% ( 72) 00:11:50.217 11734.065 - 11796.480: 88.8951% ( 74) 00:11:50.217 11796.480 - 11858.895: 89.4292% ( 67) 00:11:50.217 11858.895 - 11921.310: 89.9474% ( 65) 00:11:50.217 11921.310 - 11983.726: 90.5134% ( 71) 00:11:50.217 11983.726 - 12046.141: 91.0555% ( 68) 00:11:50.217 12046.141 - 12108.556: 91.5418% ( 61) 00:11:50.217 12108.556 - 12170.971: 92.0041% ( 58) 00:11:50.217 12170.971 - 12233.387: 92.4267% ( 53) 00:11:50.217 12233.387 - 12295.802: 92.8571% ( 54) 00:11:50.217 12295.802 - 12358.217: 93.2318% ( 47) 00:11:50.217 12358.217 - 12420.632: 93.7659% ( 67) 00:11:50.217 12420.632 - 12483.048: 94.1805% ( 52) 00:11:50.217 12483.048 - 12545.463: 94.5950% ( 52) 00:11:50.217 12545.463 - 12607.878: 95.0096% ( 52) 00:11:50.217 12607.878 - 12670.293: 95.3444% ( 42) 00:11:50.217 12670.293 - 12732.709: 95.6234% ( 35) 00:11:50.217 12732.709 - 12795.124: 95.9104% ( 36) 00:11:50.217 12795.124 - 12857.539: 96.1894% ( 35) 00:11:50.217 12857.539 - 12919.954: 96.4365% ( 31) 00:11:50.217 12919.954 - 12982.370: 96.6438% ( 26) 00:11:50.217 12982.370 - 13044.785: 96.8192% ( 22) 00:11:50.217 13044.785 - 13107.200: 97.0026% ( 23) 00:11:50.217 13107.200 - 13169.615: 97.1620% ( 20) 00:11:50.217 13169.615 - 13232.030: 97.3533% ( 24) 00:11:50.217 13232.030 - 13294.446: 97.4809% ( 16) 00:11:50.217 13294.446 - 13356.861: 97.6084% ( 16) 00:11:50.217 13356.861 - 13419.276: 97.7121% ( 13) 00:11:50.217 13419.276 - 13481.691: 97.8316% ( 15) 00:11:50.217 13481.691 - 13544.107: 97.9432% ( 14) 00:11:50.217 13544.107 - 13606.522: 98.0469% ( 13) 00:11:50.217 13606.522 - 13668.937: 98.1585% ( 14) 00:11:50.217 13668.937 - 13731.352: 98.2701% ( 14) 00:11:50.217 13731.352 - 13793.768: 98.3737% ( 13) 00:11:50.217 13793.768 - 13856.183: 98.4853% ( 14) 00:11:50.217 13856.183 - 13918.598: 98.5969% ( 14) 00:11:50.217 13918.598 - 13981.013: 98.7006% ( 13) 00:11:50.217 13981.013 - 14043.429: 98.7883% ( 11) 00:11:50.217 14043.429 - 14105.844: 98.8680% ( 10) 00:11:50.217 14105.844 - 14168.259: 98.9238% ( 7) 00:11:50.217 14168.259 - 14230.674: 98.9477% ( 3) 00:11:50.217 14230.674 - 14293.090: 98.9796% ( 4) 00:11:50.217 19723.215 - 19848.046: 98.9876% ( 1) 00:11:50.217 20597.029 - 20721.859: 99.0115% ( 3) 00:11:50.217 20721.859 - 20846.690: 99.0912% ( 10) 00:11:50.217 20846.690 - 20971.520: 99.1629% ( 9) 00:11:50.217 20971.520 - 21096.350: 99.2427% ( 10) 00:11:50.217 21096.350 - 21221.181: 99.3782% ( 17) 00:11:50.217 21221.181 - 21346.011: 99.5137% ( 17) 00:11:50.217 21346.011 - 21470.842: 99.6173% ( 13) 00:11:50.217 21470.842 - 21595.672: 99.6811% ( 8) 00:11:50.217 21595.672 - 21720.503: 99.7050% ( 3) 00:11:50.217 21720.503 - 21845.333: 99.7369% ( 4) 00:11:50.217 21845.333 - 21970.164: 99.7608% ( 3) 00:11:50.217 21970.164 - 22094.994: 99.7927% ( 4) 00:11:50.217 22094.994 - 22219.825: 99.8166% ( 3) 00:11:50.217 
22219.825 - 22344.655: 99.8406% ( 3) 00:11:50.217 22344.655 - 22469.486: 99.8724% ( 4) 00:11:50.217 22469.486 - 22594.316: 99.8964% ( 3) 00:11:50.217 22594.316 - 22719.147: 99.9283% ( 4) 00:11:50.217 22719.147 - 22843.977: 99.9601% ( 4) 00:11:50.217 22843.977 - 22968.808: 99.9841% ( 3) 00:11:50.217 22968.808 - 23093.638: 100.0000% ( 2) 00:11:50.217 00:11:50.217 ************************************ 00:11:50.217 END TEST nvme_perf 00:11:50.217 ************************************ 00:11:50.217 05:09:09 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:50.217 00:11:50.217 real 0m2.846s 00:11:50.217 user 0m2.444s 00:11:50.217 sys 0m0.302s 00:11:50.217 05:09:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.217 05:09:09 -- common/autotest_common.sh@10 -- # set +x 00:11:50.217 05:09:09 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:50.217 05:09:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:50.217 05:09:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:50.217 05:09:09 -- common/autotest_common.sh@10 -- # set +x 00:11:50.217 ************************************ 00:11:50.217 START TEST nvme_hello_world 00:11:50.217 ************************************ 00:11:50.217 05:09:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:50.785 Initializing NVMe Controllers 00:11:50.785 Attached to 0000:00:06.0 00:11:50.785 Namespace ID: 1 size: 6GB 00:11:50.785 Attached to 0000:00:07.0 00:11:50.785 Namespace ID: 1 size: 5GB 00:11:50.785 Attached to 0000:00:09.0 00:11:50.785 Namespace ID: 1 size: 1GB 00:11:50.785 Attached to 0000:00:08.0 00:11:50.785 Namespace ID: 1 size: 4GB 00:11:50.785 Namespace ID: 2 size: 4GB 00:11:50.785 Namespace ID: 3 size: 4GB 00:11:50.785 Initialization complete. 00:11:50.785 INFO: using host memory buffer for IO 00:11:50.785 Hello world! 00:11:50.785 INFO: using host memory buffer for IO 00:11:50.785 Hello world! 00:11:50.785 INFO: using host memory buffer for IO 00:11:50.785 Hello world! 00:11:50.785 INFO: using host memory buffer for IO 00:11:50.785 Hello world! 00:11:50.785 INFO: using host memory buffer for IO 00:11:50.785 Hello world! 00:11:50.785 INFO: using host memory buffer for IO 00:11:50.785 Hello world! 
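The latency histograms printed by nvme_perf above are cumulative: each bucket line gives a latency range in microseconds and the percentage of all I/Os that completed at or below its upper bound. Approximate percentiles can therefore be read off as the upper bound of the first bucket whose cumulative percentage reaches the target. A minimal offline sketch of that reading, assuming one "low - high: cumulative% ( count )" bucket per line as in the raw console output ("console.log" is a hypothetical path to a saved copy of one controller's histogram):

    import re

    # Bucket lines in the nvme_perf output look like:
    #   "11421.989 - 11484.404: 85.0048% (    79)"
    BUCKET = re.compile(r"(\d+\.\d+)\s*-\s*(\d+\.\d+):\s*(\d+\.\d+)%\s*\(\s*(\d+)\)")

    def percentile(bucket_lines, target):
        # The histogram is cumulative, so the first bucket that reaches the
        # target percentage bounds that percentile from above.
        for line in bucket_lines:
            m = BUCKET.search(line)
            if m and float(m.group(3)) >= target:
                return float(m.group(2))
        return None  # histogram ended before the target was reached

    # Hypothetical log path; feeding several concatenated histograms at once
    # would mix their buckets, so pass one controller's histogram at a time.
    with open("console.log") as f:
        lines = f.readlines()
    for p in (50.0, 90.0, 99.0):
        print(f"p{p:.0f} <= {percentile(lines, p)} us")

For the first histogram above, for example, the p85 latency is bounded by roughly 11484 us, since 85.0048% of I/Os had completed by that bucket.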
00:11:50.785 00:11:50.785 real 0m0.355s 00:11:50.785 user 0m0.181s 00:11:50.785 sys 0m0.131s 00:11:50.785 05:09:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.785 ************************************ 00:11:50.785 END TEST nvme_hello_world 00:11:50.785 ************************************ 00:11:50.785 05:09:09 -- common/autotest_common.sh@10 -- # set +x 00:11:50.785 05:09:09 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:50.785 05:09:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:50.785 05:09:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:50.785 05:09:09 -- common/autotest_common.sh@10 -- # set +x 00:11:50.785 ************************************ 00:11:50.785 START TEST nvme_sgl 00:11:50.785 ************************************ 00:11:50.785 05:09:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:51.044 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:11:51.044 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:11:51.044 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:11:51.044 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:11:51.044 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:11:51.044 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:11:51.044 0000:00:07.0: build_io_request_0 Invalid IO length parameter 00:11:51.044 0000:00:07.0: build_io_request_1 Invalid IO length parameter 00:11:51.044 0000:00:07.0: build_io_request_3 Invalid IO length parameter 00:11:51.303 0000:00:07.0: build_io_request_8 Invalid IO length parameter 00:11:51.303 0000:00:07.0: build_io_request_9 Invalid IO length parameter 00:11:51.303 0000:00:07.0: build_io_request_11 Invalid IO length parameter 00:11:51.303 0000:00:09.0: build_io_request_0 Invalid IO length parameter 00:11:51.303 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:11:51.303 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:11:51.303 0000:00:09.0: build_io_request_3 Invalid IO length parameter 00:11:51.303 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:11:51.303 0000:00:09.0: build_io_request_5 Invalid IO length parameter 00:11:51.303 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:11:51.303 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:11:51.303 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:11:51.303 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:11:51.303 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:11:51.303 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:11:51.303 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:11:51.303 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:11:51.303 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:11:51.303 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:11:51.303 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:11:51.303 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:11:51.303 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:11:51.303 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:11:51.303 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:11:51.303 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:11:51.303 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:11:51.303 
0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:11:51.303 NVMe Readv/Writev Request test 00:11:51.303 Attached to 0000:00:06.0 00:11:51.303 Attached to 0000:00:07.0 00:11:51.303 Attached to 0000:00:09.0 00:11:51.303 Attached to 0000:00:08.0 00:11:51.303 0000:00:06.0: build_io_request_2 test passed 00:11:51.303 0000:00:06.0: build_io_request_4 test passed 00:11:51.303 0000:00:06.0: build_io_request_5 test passed 00:11:51.303 0000:00:06.0: build_io_request_6 test passed 00:11:51.303 0000:00:06.0: build_io_request_7 test passed 00:11:51.303 0000:00:06.0: build_io_request_10 test passed 00:11:51.303 0000:00:07.0: build_io_request_2 test passed 00:11:51.303 0000:00:07.0: build_io_request_4 test passed 00:11:51.303 0000:00:07.0: build_io_request_5 test passed 00:11:51.303 0000:00:07.0: build_io_request_6 test passed 00:11:51.303 0000:00:07.0: build_io_request_7 test passed 00:11:51.303 0000:00:07.0: build_io_request_10 test passed 00:11:51.303 Cleaning up... 00:11:51.303 00:11:51.303 real 0m0.577s 00:11:51.303 user 0m0.343s 00:11:51.303 sys 0m0.184s 00:11:51.303 05:09:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.303 05:09:10 -- common/autotest_common.sh@10 -- # set +x 00:11:51.303 ************************************ 00:11:51.303 END TEST nvme_sgl 00:11:51.303 ************************************ 00:11:51.303 05:09:10 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:51.303 05:09:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:51.303 05:09:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:51.303 05:09:10 -- common/autotest_common.sh@10 -- # set +x 00:11:51.303 ************************************ 00:11:51.303 START TEST nvme_e2edp 00:11:51.303 ************************************ 00:11:51.303 05:09:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:51.562 NVMe Write/Read with End-to-End data protection test 00:11:51.562 Attached to 0000:00:06.0 00:11:51.562 Attached to 0000:00:07.0 00:11:51.562 Attached to 0000:00:09.0 00:11:51.562 Attached to 0000:00:08.0 00:11:51.562 Cleaning up... 
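In the SGL test output above, both message types appear to indicate success, since the test itself completes cleanly: the requests reported as "Invalid IO length parameter" (indices 0, 1, 3, 8, 9 and 11 on 0000:00:06.0 and 0000:00:07.0, and all twelve on 0000:00:09.0 and 0000:00:08.0) are negative cases the controller is expected to refuse, while the remaining requests must report "test passed". A quick per-controller tally of passed versus rejected indices can be pulled from a saved log; a sketch, assuming the two message layouts shown above and a hypothetical log path:

    import re
    from collections import defaultdict

    PASS = re.compile(r"(\d{4}:\d{2}:\d{2}\.\d): build_io_request_(\d+) test passed")
    FAIL = re.compile(r"(\d{4}:\d{2}:\d{2}\.\d): build_io_request_(\d+) Invalid IO length parameter")

    passed = defaultdict(set)
    rejected = defaultdict(set)
    with open("console.log") as f:  # hypothetical path to a saved copy of this log
        for line in f:
            for m in PASS.finditer(line):
                passed[m.group(1)].add(int(m.group(2)))
            for m in FAIL.finditer(line):
                rejected[m.group(1)].add(int(m.group(2)))

    for ctrlr in sorted(set(passed) | set(rejected)):
        print(ctrlr, "passed:", sorted(passed[ctrlr]), "rejected:", sorted(rejected[ctrlr]))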
00:11:51.562 ************************************ 00:11:51.562 END TEST nvme_e2edp 00:11:51.562 ************************************ 00:11:51.562 00:11:51.562 real 0m0.313s 00:11:51.562 user 0m0.112s 00:11:51.562 sys 0m0.156s 00:11:51.562 05:09:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.562 05:09:10 -- common/autotest_common.sh@10 -- # set +x 00:11:51.562 05:09:10 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:51.562 05:09:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:51.562 05:09:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:51.563 05:09:10 -- common/autotest_common.sh@10 -- # set +x 00:11:51.563 ************************************ 00:11:51.563 START TEST nvme_reserve 00:11:51.563 ************************************ 00:11:51.563 05:09:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:51.822 ===================================================== 00:11:51.822 NVMe Controller at PCI bus 0, device 6, function 0 00:11:51.822 ===================================================== 00:11:51.822 Reservations: Not Supported 00:11:51.822 ===================================================== 00:11:51.822 NVMe Controller at PCI bus 0, device 7, function 0 00:11:51.822 ===================================================== 00:11:51.822 Reservations: Not Supported 00:11:51.822 ===================================================== 00:11:51.822 NVMe Controller at PCI bus 0, device 9, function 0 00:11:51.822 ===================================================== 00:11:51.822 Reservations: Not Supported 00:11:51.822 ===================================================== 00:11:51.822 NVMe Controller at PCI bus 0, device 8, function 0 00:11:51.822 ===================================================== 00:11:51.822 Reservations: Not Supported 00:11:51.822 Reservation test passed 00:11:51.822 00:11:51.822 real 0m0.251s 00:11:51.822 user 0m0.076s 00:11:51.822 sys 0m0.129s 00:11:51.822 05:09:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.822 05:09:10 -- common/autotest_common.sh@10 -- # set +x 00:11:51.822 ************************************ 00:11:51.822 END TEST nvme_reserve 00:11:51.822 ************************************ 00:11:52.081 05:09:10 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:52.081 05:09:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:52.081 05:09:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:52.081 05:09:10 -- common/autotest_common.sh@10 -- # set +x 00:11:52.081 ************************************ 00:11:52.081 START TEST nvme_err_injection 00:11:52.081 ************************************ 00:11:52.081 05:09:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:52.340 NVMe Error Injection test 00:11:52.340 Attached to 0000:00:06.0 00:11:52.340 Attached to 0000:00:07.0 00:11:52.340 Attached to 0000:00:09.0 00:11:52.340 Attached to 0000:00:08.0 00:11:52.340 0000:00:06.0: get features failed as expected 00:11:52.340 0000:00:07.0: get features failed as expected 00:11:52.340 0000:00:09.0: get features failed as expected 00:11:52.340 0000:00:08.0: get features failed as expected 00:11:52.340 0000:00:08.0: get features successfully as expected 00:11:52.340 0000:00:06.0: get features successfully as expected 00:11:52.340 0000:00:07.0: get features 
successfully as expected 00:11:52.340 0000:00:09.0: get features successfully as expected 00:11:52.340 0000:00:09.0: read failed as expected 00:11:52.340 0000:00:08.0: read failed as expected 00:11:52.340 0000:00:06.0: read failed as expected 00:11:52.340 0000:00:07.0: read failed as expected 00:11:52.340 0000:00:06.0: read successfully as expected 00:11:52.340 0000:00:07.0: read successfully as expected 00:11:52.340 0000:00:09.0: read successfully as expected 00:11:52.340 0000:00:08.0: read successfully as expected 00:11:52.340 Cleaning up... 00:11:52.340 00:11:52.340 real 0m0.401s 00:11:52.340 user 0m0.184s 00:11:52.340 sys 0m0.172s 00:11:52.340 ************************************ 00:11:52.340 END TEST nvme_err_injection 00:11:52.340 ************************************ 00:11:52.340 05:09:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.340 05:09:11 -- common/autotest_common.sh@10 -- # set +x 00:11:52.340 05:09:11 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:52.340 05:09:11 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:52.340 05:09:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:52.340 05:09:11 -- common/autotest_common.sh@10 -- # set +x 00:11:52.341 ************************************ 00:11:52.341 START TEST nvme_overhead 00:11:52.341 ************************************ 00:11:52.341 05:09:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:53.719 Initializing NVMe Controllers 00:11:53.719 Attached to 0000:00:06.0 00:11:53.719 Attached to 0000:00:07.0 00:11:53.719 Attached to 0000:00:09.0 00:11:53.719 Attached to 0000:00:08.0 00:11:53.719 Initialization complete. Launching workers. 
00:11:53.719 submit (in ns) avg, min, max = 13367.4, 12039.0, 49293.3 00:11:53.719 complete (in ns) avg, min, max = 8985.6, 7910.5, 101979.0 00:11:53.719 00:11:53.719 Submit histogram 00:11:53.719 ================ 00:11:53.719 Range in us Cumulative Count 00:11:53.719 12.008 - 12.069: 0.0258% ( 2) 00:11:53.719 12.069 - 12.130: 0.1938% ( 13) 00:11:53.719 12.130 - 12.190: 1.0205% ( 64) 00:11:53.719 12.190 - 12.251: 3.0616% ( 158) 00:11:53.719 12.251 - 12.312: 6.8337% ( 292) 00:11:53.719 12.312 - 12.373: 12.0656% ( 405) 00:11:53.719 12.373 - 12.434: 18.1889% ( 474) 00:11:53.719 12.434 - 12.495: 24.1958% ( 465) 00:11:53.719 12.495 - 12.556: 30.3449% ( 476) 00:11:53.719 12.556 - 12.617: 36.4811% ( 475) 00:11:53.719 12.617 - 12.678: 42.5139% ( 467) 00:11:53.719 12.678 - 12.739: 48.4434% ( 459) 00:11:53.719 12.739 - 12.800: 53.9336% ( 425) 00:11:53.719 12.800 - 12.861: 57.9253% ( 309) 00:11:53.719 12.861 - 12.922: 61.4133% ( 270) 00:11:53.719 12.922 - 12.983: 63.4543% ( 158) 00:11:53.719 12.983 - 13.044: 65.2112% ( 136) 00:11:53.719 13.044 - 13.105: 66.4384% ( 95) 00:11:53.719 13.105 - 13.166: 67.6269% ( 92) 00:11:53.719 13.166 - 13.227: 68.8671% ( 96) 00:11:53.719 13.227 - 13.288: 70.2881% ( 110) 00:11:53.719 13.288 - 13.349: 71.6703% ( 107) 00:11:53.719 13.349 - 13.410: 73.0267% ( 105) 00:11:53.719 13.410 - 13.470: 74.4865% ( 113) 00:11:53.719 13.470 - 13.531: 76.0496% ( 121) 00:11:53.719 13.531 - 13.592: 77.9357% ( 146) 00:11:53.719 13.592 - 13.653: 80.0413% ( 163) 00:11:53.719 13.653 - 13.714: 82.5216% ( 192) 00:11:53.719 13.714 - 13.775: 84.3431% ( 141) 00:11:53.719 13.775 - 13.836: 85.6478% ( 101) 00:11:53.719 13.836 - 13.897: 86.5005% ( 66) 00:11:53.719 13.897 - 13.958: 87.3272% ( 64) 00:11:53.719 13.958 - 14.019: 87.8956% ( 44) 00:11:53.719 14.019 - 14.080: 88.5674% ( 52) 00:11:53.719 14.080 - 14.141: 88.9549% ( 30) 00:11:53.719 14.141 - 14.202: 89.4329% ( 37) 00:11:53.719 14.202 - 14.263: 89.8463% ( 32) 00:11:53.719 14.263 - 14.324: 90.0917% ( 19) 00:11:53.719 14.324 - 14.385: 90.6343% ( 42) 00:11:53.719 14.385 - 14.446: 90.9960% ( 28) 00:11:53.719 14.446 - 14.507: 91.3577% ( 28) 00:11:53.719 14.507 - 14.568: 91.7323% ( 29) 00:11:53.719 14.568 - 14.629: 91.9132% ( 14) 00:11:53.719 14.629 - 14.690: 92.2361% ( 25) 00:11:53.719 14.690 - 14.750: 92.3007% ( 5) 00:11:53.719 14.750 - 14.811: 92.4687% ( 13) 00:11:53.719 14.811 - 14.872: 92.5203% ( 4) 00:11:53.719 14.872 - 14.933: 92.5591% ( 3) 00:11:53.719 14.933 - 14.994: 92.6624% ( 8) 00:11:53.719 14.994 - 15.055: 92.7658% ( 8) 00:11:53.719 15.055 - 15.116: 92.8821% ( 9) 00:11:53.719 15.116 - 15.177: 92.9854% ( 8) 00:11:53.719 15.177 - 15.238: 93.0629% ( 6) 00:11:53.719 15.238 - 15.299: 93.1533% ( 7) 00:11:53.719 15.299 - 15.360: 93.2050% ( 4) 00:11:53.719 15.360 - 15.421: 93.2696% ( 5) 00:11:53.719 15.421 - 15.482: 93.3084% ( 3) 00:11:53.719 15.482 - 15.543: 93.3600% ( 4) 00:11:53.719 15.543 - 15.604: 93.4117% ( 4) 00:11:53.719 15.604 - 15.726: 93.4505% ( 3) 00:11:53.719 15.726 - 15.848: 93.5150% ( 5) 00:11:53.719 15.848 - 15.970: 93.5538% ( 3) 00:11:53.719 15.970 - 16.091: 93.6184% ( 5) 00:11:53.719 16.091 - 16.213: 93.6959% ( 6) 00:11:53.719 16.213 - 16.335: 93.7863% ( 7) 00:11:53.719 16.335 - 16.457: 93.8380% ( 4) 00:11:53.719 16.457 - 16.579: 93.9930% ( 12) 00:11:53.719 16.579 - 16.701: 94.0964% ( 8) 00:11:53.719 16.701 - 16.823: 94.2772% ( 14) 00:11:53.719 16.823 - 16.945: 94.4968% ( 17) 00:11:53.719 16.945 - 17.067: 94.6777% ( 14) 00:11:53.719 17.067 - 17.189: 94.8715% ( 15) 00:11:53.719 17.189 - 17.310: 95.0911% ( 17) 00:11:53.719 
17.310 - 17.432: 95.2203% ( 10) 00:11:53.719 17.432 - 17.554: 95.4786% ( 20) 00:11:53.719 17.554 - 17.676: 95.6078% ( 10) 00:11:53.719 17.676 - 17.798: 95.8403% ( 18) 00:11:53.719 17.798 - 17.920: 95.9953% ( 12) 00:11:53.719 17.920 - 18.042: 96.1633% ( 13) 00:11:53.719 18.042 - 18.164: 96.3183% ( 12) 00:11:53.719 18.164 - 18.286: 96.4992% ( 14) 00:11:53.719 18.286 - 18.408: 96.5767% ( 6) 00:11:53.719 18.408 - 18.530: 96.7575% ( 14) 00:11:53.719 18.530 - 18.651: 96.9125% ( 12) 00:11:53.719 18.651 - 18.773: 97.1063% ( 15) 00:11:53.719 18.773 - 18.895: 97.2355% ( 10) 00:11:53.719 18.895 - 19.017: 97.3647% ( 10) 00:11:53.719 19.017 - 19.139: 97.5068% ( 11) 00:11:53.719 19.139 - 19.261: 97.6230% ( 9) 00:11:53.719 19.261 - 19.383: 97.7522% ( 10) 00:11:53.719 19.383 - 19.505: 97.8556% ( 8) 00:11:53.719 19.505 - 19.627: 97.9460% ( 7) 00:11:53.719 19.627 - 19.749: 98.1010% ( 12) 00:11:53.719 19.749 - 19.870: 98.1785% ( 6) 00:11:53.719 19.870 - 19.992: 98.2819% ( 8) 00:11:53.719 19.992 - 20.114: 98.3594% ( 6) 00:11:53.719 20.114 - 20.236: 98.4111% ( 4) 00:11:53.719 20.236 - 20.358: 98.4498% ( 3) 00:11:53.719 20.358 - 20.480: 98.5015% ( 4) 00:11:53.719 20.480 - 20.602: 98.5402% ( 3) 00:11:53.719 20.602 - 20.724: 98.5790% ( 3) 00:11:53.719 20.724 - 20.846: 98.6436% ( 5) 00:11:53.719 20.846 - 20.968: 98.6823% ( 3) 00:11:53.719 20.968 - 21.090: 98.7082% ( 2) 00:11:53.719 21.090 - 21.211: 98.7340% ( 2) 00:11:53.719 21.211 - 21.333: 98.7469% ( 1) 00:11:53.719 21.455 - 21.577: 98.7728% ( 2) 00:11:53.719 21.577 - 21.699: 98.7986% ( 2) 00:11:53.719 21.699 - 21.821: 98.8244% ( 2) 00:11:53.719 21.821 - 21.943: 98.8503% ( 2) 00:11:53.719 21.943 - 22.065: 98.8632% ( 1) 00:11:53.719 22.187 - 22.309: 98.8890% ( 2) 00:11:53.719 22.309 - 22.430: 98.9020% ( 1) 00:11:53.719 22.430 - 22.552: 98.9149% ( 1) 00:11:53.719 22.552 - 22.674: 98.9278% ( 1) 00:11:53.719 22.796 - 22.918: 98.9665% ( 3) 00:11:53.719 23.040 - 23.162: 99.0311% ( 5) 00:11:53.719 23.406 - 23.528: 99.0441% ( 1) 00:11:53.719 23.650 - 23.771: 99.0570% ( 1) 00:11:53.719 23.771 - 23.893: 99.0699% ( 1) 00:11:53.719 23.893 - 24.015: 99.0828% ( 1) 00:11:53.719 24.503 - 24.625: 99.0957% ( 1) 00:11:53.719 24.625 - 24.747: 99.1086% ( 1) 00:11:53.719 24.747 - 24.869: 99.1603% ( 4) 00:11:53.719 24.869 - 24.990: 99.1991% ( 3) 00:11:53.719 24.990 - 25.112: 99.2637% ( 5) 00:11:53.719 25.112 - 25.234: 99.3283% ( 5) 00:11:53.719 25.234 - 25.356: 99.3799% ( 4) 00:11:53.719 25.356 - 25.478: 99.4058% ( 2) 00:11:53.719 25.478 - 25.600: 99.4833% ( 6) 00:11:53.719 25.600 - 25.722: 99.5349% ( 4) 00:11:53.719 25.722 - 25.844: 99.5866% ( 4) 00:11:53.719 25.844 - 25.966: 99.5995% ( 1) 00:11:53.719 25.966 - 26.088: 99.6383% ( 3) 00:11:53.719 26.088 - 26.210: 99.6512% ( 1) 00:11:53.720 26.210 - 26.331: 99.6641% ( 1) 00:11:53.720 26.453 - 26.575: 99.6770% ( 1) 00:11:53.720 26.819 - 26.941: 99.6900% ( 1) 00:11:53.720 27.307 - 27.429: 99.7029% ( 1) 00:11:53.720 27.794 - 27.916: 99.7158% ( 1) 00:11:53.720 27.916 - 28.038: 99.7287% ( 1) 00:11:53.720 28.526 - 28.648: 99.7416% ( 1) 00:11:53.720 28.770 - 28.891: 99.7675% ( 2) 00:11:53.720 29.257 - 29.379: 99.7804% ( 1) 00:11:53.720 29.501 - 29.623: 99.7933% ( 1) 00:11:53.720 29.745 - 29.867: 99.8191% ( 2) 00:11:53.720 29.989 - 30.110: 99.8321% ( 1) 00:11:53.720 30.354 - 30.476: 99.8450% ( 1) 00:11:53.720 30.476 - 30.598: 99.8579% ( 1) 00:11:53.720 30.598 - 30.720: 99.8708% ( 1) 00:11:53.720 30.964 - 31.086: 99.8837% ( 1) 00:11:53.720 31.208 - 31.451: 99.8967% ( 1) 00:11:53.720 31.451 - 31.695: 99.9096% ( 1) 00:11:53.720 32.427 - 32.670: 
99.9225% ( 1) 00:11:53.720 34.133 - 34.377: 99.9354% ( 1) 00:11:53.720 34.865 - 35.109: 99.9483% ( 1) 00:11:53.720 35.352 - 35.596: 99.9612% ( 1) 00:11:53.720 36.084 - 36.328: 99.9742% ( 1) 00:11:53.720 44.617 - 44.861: 99.9871% ( 1) 00:11:53.720 49.250 - 49.493: 100.0000% ( 1) 00:11:53.720 00:11:53.720 Complete histogram 00:11:53.720 ================== 00:11:53.720 Range in us Cumulative Count 00:11:53.720 7.863 - 7.924: 0.0517% ( 4) 00:11:53.720 7.924 - 7.985: 1.2401% ( 92) 00:11:53.720 7.985 - 8.046: 4.8702% ( 281) 00:11:53.720 8.046 - 8.107: 10.3217% ( 422) 00:11:53.720 8.107 - 8.168: 14.6880% ( 338) 00:11:53.720 8.168 - 8.229: 17.4654% ( 215) 00:11:53.720 8.229 - 8.290: 19.0931% ( 126) 00:11:53.720 8.290 - 8.350: 20.4366% ( 104) 00:11:53.720 8.350 - 8.411: 23.4078% ( 230) 00:11:53.720 8.411 - 8.472: 29.1177% ( 442) 00:11:53.720 8.472 - 8.533: 37.2303% ( 628) 00:11:53.720 8.533 - 8.594: 46.8157% ( 742) 00:11:53.720 8.594 - 8.655: 54.2953% ( 579) 00:11:53.720 8.655 - 8.716: 61.0515% ( 523) 00:11:53.720 8.716 - 8.777: 67.6915% ( 514) 00:11:53.720 8.777 - 8.838: 72.3421% ( 360) 00:11:53.720 8.838 - 8.899: 75.1453% ( 217) 00:11:53.720 8.899 - 8.960: 77.0314% ( 146) 00:11:53.720 8.960 - 9.021: 79.3567% ( 180) 00:11:53.720 9.021 - 9.082: 80.8035% ( 112) 00:11:53.720 9.082 - 9.143: 81.9403% ( 88) 00:11:53.720 9.143 - 9.204: 83.1159% ( 91) 00:11:53.720 9.204 - 9.265: 84.6402% ( 118) 00:11:53.720 9.265 - 9.326: 86.2808% ( 127) 00:11:53.720 9.326 - 9.387: 87.5597% ( 99) 00:11:53.720 9.387 - 9.448: 88.7353% ( 91) 00:11:53.720 9.448 - 9.509: 90.0271% ( 100) 00:11:53.720 9.509 - 9.570: 90.8281% ( 62) 00:11:53.720 9.570 - 9.630: 91.2414% ( 32) 00:11:53.720 9.630 - 9.691: 91.6161% ( 29) 00:11:53.720 9.691 - 9.752: 92.0682% ( 35) 00:11:53.720 9.752 - 9.813: 92.6883% ( 48) 00:11:53.720 9.813 - 9.874: 93.1404% ( 35) 00:11:53.720 9.874 - 9.935: 93.5667% ( 33) 00:11:53.720 9.935 - 9.996: 93.7863% ( 17) 00:11:53.720 9.996 - 10.057: 94.1351% ( 27) 00:11:53.720 10.057 - 10.118: 94.3806% ( 19) 00:11:53.720 10.118 - 10.179: 94.6519% ( 21) 00:11:53.720 10.179 - 10.240: 94.7940% ( 11) 00:11:53.720 10.240 - 10.301: 94.9748% ( 14) 00:11:53.720 10.301 - 10.362: 95.0652% ( 7) 00:11:53.720 10.362 - 10.423: 95.2203% ( 12) 00:11:53.720 10.423 - 10.484: 95.3236% ( 8) 00:11:53.720 10.484 - 10.545: 95.4269% ( 8) 00:11:53.720 10.545 - 10.606: 95.5045% ( 6) 00:11:53.720 10.606 - 10.667: 95.5820% ( 6) 00:11:53.720 10.667 - 10.728: 95.6078% ( 2) 00:11:53.720 10.728 - 10.789: 95.6724% ( 5) 00:11:53.720 10.789 - 10.850: 95.7499% ( 6) 00:11:53.720 10.850 - 10.910: 95.8274% ( 6) 00:11:53.720 10.910 - 10.971: 95.8791% ( 4) 00:11:53.720 10.971 - 11.032: 95.9566% ( 6) 00:11:53.720 11.032 - 11.093: 96.0341% ( 6) 00:11:53.720 11.093 - 11.154: 96.0729% ( 3) 00:11:53.720 11.215 - 11.276: 96.1116% ( 3) 00:11:53.720 11.276 - 11.337: 96.1374% ( 2) 00:11:53.720 11.337 - 11.398: 96.1762% ( 3) 00:11:53.720 11.398 - 11.459: 96.2020% ( 2) 00:11:53.720 11.459 - 11.520: 96.2279% ( 2) 00:11:53.720 11.520 - 11.581: 96.2796% ( 4) 00:11:53.720 11.581 - 11.642: 96.3700% ( 7) 00:11:53.720 11.642 - 11.703: 96.4346% ( 5) 00:11:53.720 11.703 - 11.764: 96.4862% ( 4) 00:11:53.720 11.764 - 11.825: 96.5121% ( 2) 00:11:53.720 11.825 - 11.886: 96.5638% ( 4) 00:11:53.720 11.886 - 11.947: 96.6283% ( 5) 00:11:53.720 11.947 - 12.008: 96.6929% ( 5) 00:11:53.720 12.008 - 12.069: 96.7188% ( 2) 00:11:53.720 12.069 - 12.130: 96.7704% ( 4) 00:11:53.720 12.130 - 12.190: 96.8092% ( 3) 00:11:53.720 12.190 - 12.251: 96.8609% ( 4) 00:11:53.720 12.251 - 12.312: 96.9255% ( 5) 
00:11:53.720 12.312 - 12.373: 96.9513% ( 2) 00:11:53.720 12.373 - 12.434: 96.9642% ( 1) 00:11:53.720 12.434 - 12.495: 96.9771% ( 1) 00:11:53.720 12.495 - 12.556: 97.0030% ( 2) 00:11:53.720 12.556 - 12.617: 97.0159% ( 1) 00:11:53.720 12.678 - 12.739: 97.0417% ( 2) 00:11:53.720 12.800 - 12.861: 97.0676% ( 2) 00:11:53.720 12.861 - 12.922: 97.0934% ( 2) 00:11:53.720 12.922 - 12.983: 97.1451% ( 4) 00:11:53.720 12.983 - 13.044: 97.1580% ( 1) 00:11:53.720 13.044 - 13.105: 97.2097% ( 4) 00:11:53.720 13.105 - 13.166: 97.2226% ( 1) 00:11:53.720 13.227 - 13.288: 97.2355% ( 1) 00:11:53.720 13.288 - 13.349: 97.2484% ( 1) 00:11:53.720 13.349 - 13.410: 97.2613% ( 1) 00:11:53.720 13.410 - 13.470: 97.3130% ( 4) 00:11:53.720 13.470 - 13.531: 97.3388% ( 2) 00:11:53.720 13.531 - 13.592: 97.3647% ( 2) 00:11:53.720 13.592 - 13.653: 97.3905% ( 2) 00:11:53.720 13.714 - 13.775: 97.4164% ( 2) 00:11:53.720 13.775 - 13.836: 97.4809% ( 5) 00:11:53.720 13.836 - 13.897: 97.5197% ( 3) 00:11:53.720 13.897 - 13.958: 97.5585% ( 3) 00:11:53.720 13.958 - 14.019: 97.5843% ( 2) 00:11:53.720 14.019 - 14.080: 97.6230% ( 3) 00:11:53.720 14.080 - 14.141: 97.7006% ( 6) 00:11:53.720 14.141 - 14.202: 97.7393% ( 3) 00:11:53.720 14.202 - 14.263: 97.8168% ( 6) 00:11:53.720 14.324 - 14.385: 97.8814% ( 5) 00:11:53.720 14.385 - 14.446: 97.9589% ( 6) 00:11:53.720 14.446 - 14.507: 97.9977% ( 3) 00:11:53.720 14.507 - 14.568: 98.0623% ( 5) 00:11:53.720 14.568 - 14.629: 98.0881% ( 2) 00:11:53.720 14.629 - 14.690: 98.1785% ( 7) 00:11:53.720 14.690 - 14.750: 98.2173% ( 3) 00:11:53.720 14.750 - 14.811: 98.2690% ( 4) 00:11:53.720 14.872 - 14.933: 98.2819% ( 1) 00:11:53.720 14.933 - 14.994: 98.3206% ( 3) 00:11:53.720 14.994 - 15.055: 98.3335% ( 1) 00:11:53.720 15.055 - 15.116: 98.3594% ( 2) 00:11:53.720 15.116 - 15.177: 98.3852% ( 2) 00:11:53.720 15.177 - 15.238: 98.4627% ( 6) 00:11:53.720 15.238 - 15.299: 98.4886% ( 2) 00:11:53.720 15.299 - 15.360: 98.5402% ( 4) 00:11:53.720 15.360 - 15.421: 98.5919% ( 4) 00:11:53.720 15.421 - 15.482: 98.6177% ( 2) 00:11:53.720 15.482 - 15.543: 98.6565% ( 3) 00:11:53.720 15.543 - 15.604: 98.6694% ( 1) 00:11:53.720 15.604 - 15.726: 98.7211% ( 4) 00:11:53.720 15.726 - 15.848: 98.7599% ( 3) 00:11:53.720 15.970 - 16.091: 98.7857% ( 2) 00:11:53.720 16.091 - 16.213: 98.7986% ( 1) 00:11:53.720 16.213 - 16.335: 98.8632% ( 5) 00:11:53.720 16.335 - 16.457: 98.8890% ( 2) 00:11:53.720 16.579 - 16.701: 98.9149% ( 2) 00:11:53.720 16.701 - 16.823: 98.9278% ( 1) 00:11:53.720 16.823 - 16.945: 98.9665% ( 3) 00:11:53.720 17.067 - 17.189: 98.9795% ( 1) 00:11:53.720 17.189 - 17.310: 98.9924% ( 1) 00:11:53.720 17.310 - 17.432: 99.0053% ( 1) 00:11:53.720 17.432 - 17.554: 99.0311% ( 2) 00:11:53.720 17.676 - 17.798: 99.0441% ( 1) 00:11:53.720 19.017 - 19.139: 99.0570% ( 1) 00:11:53.720 19.139 - 19.261: 99.0699% ( 1) 00:11:53.720 19.261 - 19.383: 99.0828% ( 1) 00:11:53.720 20.236 - 20.358: 99.0957% ( 1) 00:11:53.720 20.358 - 20.480: 99.2507% ( 12) 00:11:53.720 20.480 - 20.602: 99.2766% ( 2) 00:11:53.720 20.602 - 20.724: 99.2895% ( 1) 00:11:53.720 20.724 - 20.846: 99.3412% ( 4) 00:11:53.720 20.846 - 20.968: 99.4574% ( 9) 00:11:53.720 20.968 - 21.090: 99.5220% ( 5) 00:11:53.720 21.090 - 21.211: 99.5479% ( 2) 00:11:53.720 21.211 - 21.333: 99.5608% ( 1) 00:11:53.720 21.333 - 21.455: 99.5995% ( 3) 00:11:53.720 21.455 - 21.577: 99.6383% ( 3) 00:11:53.720 21.577 - 21.699: 99.6770% ( 3) 00:11:53.720 21.699 - 21.821: 99.6900% ( 1) 00:11:53.720 22.309 - 22.430: 99.7029% ( 1) 00:11:53.720 22.430 - 22.552: 99.7158% ( 1) 00:11:53.721 23.040 - 23.162: 
99.7287% ( 1) 00:11:53.721 23.650 - 23.771: 99.7546% ( 2) 00:11:53.721 24.503 - 24.625: 99.7675% ( 1) 00:11:53.721 24.990 - 25.112: 99.7804% ( 1) 00:11:53.721 25.234 - 25.356: 99.7933% ( 1) 00:11:53.721 25.478 - 25.600: 99.8062% ( 1) 00:11:53.721 25.600 - 25.722: 99.8321% ( 2) 00:11:53.721 26.088 - 26.210: 99.8450% ( 1) 00:11:53.721 26.697 - 26.819: 99.8579% ( 1) 00:11:53.721 26.819 - 26.941: 99.8708% ( 1) 00:11:53.721 28.282 - 28.404: 99.8967% ( 2) 00:11:53.721 30.476 - 30.598: 99.9096% ( 1) 00:11:53.721 31.208 - 31.451: 99.9225% ( 1) 00:11:53.721 39.010 - 39.253: 99.9354% ( 1) 00:11:53.721 42.179 - 42.423: 99.9483% ( 1) 00:11:53.721 43.886 - 44.130: 99.9612% ( 1) 00:11:53.721 45.349 - 45.592: 99.9742% ( 1) 00:11:53.721 45.592 - 45.836: 99.9871% ( 1) 00:11:53.721 101.912 - 102.400: 100.0000% ( 1) 00:11:53.721 00:11:53.721 00:11:53.721 real 0m1.342s 00:11:53.721 user 0m1.128s 00:11:53.721 sys 0m0.158s 00:11:53.721 ************************************ 00:11:53.721 END TEST nvme_overhead 00:11:53.721 ************************************ 00:11:53.721 05:09:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.721 05:09:12 -- common/autotest_common.sh@10 -- # set +x 00:11:53.721 05:09:12 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:53.721 05:09:12 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:53.721 05:09:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:53.721 05:09:12 -- common/autotest_common.sh@10 -- # set +x 00:11:53.721 ************************************ 00:11:53.721 START TEST nvme_arbitration 00:11:53.721 ************************************ 00:11:53.721 05:09:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:57.944 Initializing NVMe Controllers 00:11:57.944 Attached to 0000:00:06.0 00:11:57.944 Attached to 0000:00:07.0 00:11:57.944 Attached to 0000:00:09.0 00:11:57.944 Attached to 0000:00:08.0 00:11:57.944 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:57.944 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:57.944 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:57.944 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:57.944 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:57.944 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:57.944 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:57.944 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:57.944 Initialization complete. Launching workers. 
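The configuration echoed above is the complete arbitration command line, so the run can be replayed outside the harness. A minimal sketch, assuming an SPDK checkout built with examples at the path below and NVMe devices already bound to a userspace driver:

  # Replay the arbitration example with the exact parameters logged above.
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout path
  sudo "$SPDK_DIR/build/examples/arbitration" \
      -q 64 -s 131072 -w randrw -M 50 \
      -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0

The -c 0xf mask corresponds to the four worker cores reporting below, and the three-second -t bound is consistent with the ~3.6 s wall time this test records.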
00:11:57.944 Starting thread on core 1 with urgent priority queue 00:11:57.944 Starting thread on core 2 with urgent priority queue 00:11:57.944 Starting thread on core 0 with urgent priority queue 00:11:57.944 Starting thread on core 3 with urgent priority queue 00:11:57.944 QEMU NVMe Ctrl (12340 ) core 0: 405.33 IO/s 246.71 secs/100000 ios 00:11:57.944 QEMU NVMe Ctrl (12342 ) core 0: 405.33 IO/s 246.71 secs/100000 ios 00:11:57.944 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:11:57.944 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:11:57.944 QEMU NVMe Ctrl (12343 ) core 2: 405.33 IO/s 246.71 secs/100000 ios 00:11:57.944 QEMU NVMe Ctrl (12342 ) core 3: 746.67 IO/s 133.93 secs/100000 ios 00:11:57.944 ======================================================== 00:11:57.944 00:11:57.944 ************************************ 00:11:57.944 END TEST nvme_arbitration 00:11:57.944 ************************************ 00:11:57.944 00:11:57.944 real 0m3.640s 00:11:57.944 user 0m9.888s 00:11:57.944 sys 0m0.162s 00:11:57.944 05:09:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.944 05:09:16 -- common/autotest_common.sh@10 -- # set +x 00:11:57.944 05:09:16 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:57.944 05:09:16 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:57.944 05:09:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:57.944 05:09:16 -- common/autotest_common.sh@10 -- # set +x 00:11:57.944 ************************************ 00:11:57.944 START TEST nvme_single_aen 00:11:57.944 ************************************ 00:11:57.944 05:09:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:57.944 [2024-07-26 05:09:16.564317] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:57.945 [2024-07-26 05:09:16.564597] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.945 [2024-07-26 05:09:16.773482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:57.945 [2024-07-26 05:09:16.775448] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:11:57.945 [2024-07-26 05:09:16.776665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:11:57.945 [2024-07-26 05:09:16.778169] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:11:57.945 Asynchronous Event Request test 00:11:57.945 Attached to 0000:00:06.0 00:11:57.945 Attached to 0000:00:07.0 00:11:57.945 Attached to 0000:00:09.0 00:11:57.945 Attached to 0000:00:08.0 00:11:57.945 Reset controller to setup AER completions for this process 00:11:57.945 Registering asynchronous event callbacks... 
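The block that follows is the temperature-threshold exercise: the tool reads each controller's threshold, lowers it beneath the current reading so the controller raises an asynchronous event, then restores it from aer_cb. Outside the harness the same feature can be poked with nvme-cli; a rough sketch, assuming a kernel-attached device at /dev/nvme0 rather than the userspace-bound controllers used here (feature 0x04 is Temperature Threshold):

  # Inspect the composite temperature and the current threshold (feature 0x04).
  sudo nvme smart-log /dev/nvme0 | grep -i temperature
  sudo nvme get-feature /dev/nvme0 -f 0x04 -H
  # Drop the threshold just below the 323 K reading these controllers report,
  # which should provoke a temperature AER, then restore the logged 343 K default.
  sudo nvme set-feature /dev/nvme0 -f 0x04 -v 0x140   # 320 K
  sudo nvme set-feature /dev/nvme0 -f 0x04 -v 0x157   # 343 K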
00:11:57.945 Getting orig temperature thresholds of all controllers 00:11:57.945 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:57.945 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:57.945 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:57.945 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:57.945 Setting all controllers temperature threshold low to trigger AER 00:11:57.945 Waiting for all controllers temperature threshold to be set lower 00:11:57.945 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:57.945 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:11:57.945 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:57.945 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:11:57.945 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:57.945 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:11:57.945 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:57.945 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:11:57.945 Waiting for all controllers to trigger AER and reset threshold 00:11:57.945 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:57.945 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:57.945 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:57.945 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:57.945 Cleaning up... 00:11:57.945 00:11:57.945 real 0m0.316s 00:11:57.945 user 0m0.111s 00:11:57.945 sys 0m0.164s 00:11:57.945 05:09:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.945 05:09:16 -- common/autotest_common.sh@10 -- # set +x 00:11:57.945 ************************************ 00:11:57.945 END TEST nvme_single_aen 00:11:57.945 ************************************ 00:11:57.945 05:09:16 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:57.945 05:09:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:57.945 05:09:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:57.945 05:09:16 -- common/autotest_common.sh@10 -- # set +x 00:11:57.945 ************************************ 00:11:57.945 START TEST nvme_doorbell_aers 00:11:57.945 ************************************ 00:11:57.945 05:09:16 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:11:57.945 05:09:16 -- nvme/nvme.sh@70 -- # bdfs=() 00:11:57.945 05:09:16 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:57.945 05:09:16 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:57.945 05:09:16 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:57.945 05:09:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:57.945 05:09:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:57.945 05:09:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:57.945 05:09:16 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:57.945 05:09:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:57.945 05:09:16 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:57.945 05:09:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:57.945 05:09:16 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:57.945 05:09:16 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:58.203 [2024-07-26 05:09:17.283062] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:08.172 Executing: test_write_invalid_db 00:12:08.172 Waiting for AER completion... 00:12:08.172 Failure: test_write_invalid_db 00:12:08.172 00:12:08.172 Executing: test_invalid_db_write_overflow_sq 00:12:08.172 Waiting for AER completion... 00:12:08.172 Failure: test_invalid_db_write_overflow_sq 00:12:08.172 00:12:08.172 Executing: test_invalid_db_write_overflow_cq 00:12:08.172 Waiting for AER completion... 00:12:08.172 Failure: test_invalid_db_write_overflow_cq 00:12:08.172 00:12:08.172 05:09:27 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:08.172 05:09:27 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:12:08.431 [2024-07-26 05:09:27.349419] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:18.479 Executing: test_write_invalid_db 00:12:18.479 Waiting for AER completion... 00:12:18.479 Failure: test_write_invalid_db 00:12:18.479 00:12:18.479 Executing: test_invalid_db_write_overflow_sq 00:12:18.479 Waiting for AER completion... 00:12:18.479 Failure: test_invalid_db_write_overflow_sq 00:12:18.479 00:12:18.479 Executing: test_invalid_db_write_overflow_cq 00:12:18.479 Waiting for AER completion... 00:12:18.479 Failure: test_invalid_db_write_overflow_cq 00:12:18.479 00:12:18.479 05:09:37 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:18.479 05:09:37 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:12:18.479 [2024-07-26 05:09:37.382392] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:28.451 Executing: test_write_invalid_db 00:12:28.451 Waiting for AER completion... 00:12:28.452 Failure: test_write_invalid_db 00:12:28.452 00:12:28.452 Executing: test_invalid_db_write_overflow_sq 00:12:28.452 Waiting for AER completion... 00:12:28.452 Failure: test_invalid_db_write_overflow_sq 00:12:28.452 00:12:28.452 Executing: test_invalid_db_write_overflow_cq 00:12:28.452 Waiting for AER completion... 00:12:28.452 Failure: test_invalid_db_write_overflow_cq 00:12:28.452 00:12:28.452 05:09:47 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:28.452 05:09:47 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:12:28.452 [2024-07-26 05:09:47.430190] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:38.423 Executing: test_write_invalid_db 00:12:38.423 Waiting for AER completion... 00:12:38.423 Failure: test_write_invalid_db 00:12:38.423 00:12:38.423 Executing: test_invalid_db_write_overflow_sq 00:12:38.423 Waiting for AER completion... 00:12:38.423 Failure: test_invalid_db_write_overflow_sq 00:12:38.423 00:12:38.423 Executing: test_invalid_db_write_overflow_cq 00:12:38.423 Waiting for AER completion... 
00:12:38.423 Failure: test_invalid_db_write_overflow_cq 00:12:38.423 00:12:38.423 00:12:38.423 real 0m40.281s 00:12:38.423 user 0m30.203s 00:12:38.423 sys 0m9.689s 00:12:38.423 05:09:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.423 ************************************ 00:12:38.423 END TEST nvme_doorbell_aers 00:12:38.423 ************************************ 00:12:38.423 05:09:57 -- common/autotest_common.sh@10 -- # set +x 00:12:38.423 05:09:57 -- nvme/nvme.sh@97 -- # uname 00:12:38.423 05:09:57 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:38.423 05:09:57 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:12:38.423 05:09:57 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:12:38.423 05:09:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:38.423 05:09:57 -- common/autotest_common.sh@10 -- # set +x 00:12:38.423 ************************************ 00:12:38.423 START TEST nvme_multi_aen 00:12:38.423 ************************************ 00:12:38.423 05:09:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:12:38.423 [2024-07-26 05:09:57.292286] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:38.423 [2024-07-26 05:09:57.292401] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.423 [2024-07-26 05:09:57.509112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:12:38.423 [2024-07-26 05:09:57.509192] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:38.423 [2024-07-26 05:09:57.509244] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:38.423 [2024-07-26 05:09:57.509263] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:38.423 [2024-07-26 05:09:57.510818] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:12:38.423 [2024-07-26 05:09:57.510849] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:38.423 [2024-07-26 05:09:57.510876] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:38.423 [2024-07-26 05:09:57.510894] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:38.423 [2024-07-26 05:09:57.512167] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:12:38.423 [2024-07-26 05:09:57.512192] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:38.423 [2024-07-26 05:09:57.512225] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 
00:12:38.423 [2024-07-26 05:09:57.512259] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:38.423 [2024-07-26 05:09:57.513563] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:12:38.423 [2024-07-26 05:09:57.513589] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:38.423 [2024-07-26 05:09:57.513613] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:38.423 [2024-07-26 05:09:57.513631] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65031) is not found. Dropping the request. 00:12:38.423 Child process pid: 65541 00:12:38.423 [2024-07-26 05:09:57.526231] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:38.423 [2024-07-26 05:09:57.526443] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.991 [Child] Asynchronous Event Request test 00:12:38.991 [Child] Attached to 0000:00:06.0 00:12:38.991 [Child] Attached to 0000:00:07.0 00:12:38.991 [Child] Attached to 0000:00:09.0 00:12:38.991 [Child] Attached to 0000:00:08.0 00:12:38.991 [Child] Registering asynchronous event callbacks... 00:12:38.991 [Child] Getting orig temperature thresholds of all controllers 00:12:38.991 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:38.991 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:38.991 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:38.991 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:38.991 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:38.991 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:38.991 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:38.991 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:38.991 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:38.991 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:38.991 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:38.991 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:38.991 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:38.991 [Child] Cleaning up... 00:12:38.992 Asynchronous Event Request test 00:12:38.992 Attached to 0000:00:06.0 00:12:38.992 Attached to 0000:00:07.0 00:12:38.992 Attached to 0000:00:09.0 00:12:38.992 Attached to 0000:00:08.0 00:12:38.992 Reset controller to setup AER completions for this process 00:12:38.992 Registering asynchronous event callbacks... 
00:12:38.992 Getting orig temperature thresholds of all controllers 00:12:38.992 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:38.992 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:38.992 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:38.992 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:38.992 Setting all controllers temperature threshold low to trigger AER 00:12:38.992 Waiting for all controllers temperature threshold to be set lower 00:12:38.992 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:38.992 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:12:38.992 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:38.992 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:12:38.992 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:38.992 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:12:38.992 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:38.992 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:12:38.992 Waiting for all controllers to trigger AER and reset threshold 00:12:38.992 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:38.992 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:38.992 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:38.992 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:38.992 Cleaning up... 00:12:38.992 00:12:38.992 real 0m0.663s 00:12:38.992 user 0m0.221s 00:12:38.992 sys 0m0.330s 00:12:38.992 05:09:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.992 05:09:57 -- common/autotest_common.sh@10 -- # set +x 00:12:38.992 ************************************ 00:12:38.992 END TEST nvme_multi_aen 00:12:38.992 ************************************ 00:12:38.992 05:09:57 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:38.992 05:09:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:38.992 05:09:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:38.992 05:09:57 -- common/autotest_common.sh@10 -- # set +x 00:12:38.992 ************************************ 00:12:38.992 START TEST nvme_startup 00:12:38.992 ************************************ 00:12:38.992 05:09:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:39.251 Initializing NVMe Controllers 00:12:39.251 Attached to 0000:00:06.0 00:12:39.251 Attached to 0000:00:07.0 00:12:39.251 Attached to 0000:00:09.0 00:12:39.251 Attached to 0000:00:08.0 00:12:39.251 Initialization complete. 00:12:39.251 Time used:216321.016 (us). 
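Time used above is the wall time the startup helper spent attaching all four controllers, roughly 216 ms. A crude cross-check, assuming the same built tree, is to time an identify pass against one of them; it exercises a different code path, so treat the number only as a ballpark:

  # Rough attach-latency probe using the identify example from the same build.
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
  time sudo "$SPDK_DIR/build/bin/spdk_nvme_identify" \
      -r 'trtype:PCIe traddr:0000:00:06.0' > /dev/null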
00:12:39.251 00:12:39.251 real 0m0.317s 00:12:39.251 user 0m0.104s 00:12:39.251 sys 0m0.167s 00:12:39.251 05:09:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.251 05:09:58 -- common/autotest_common.sh@10 -- # set +x 00:12:39.251 ************************************ 00:12:39.251 END TEST nvme_startup 00:12:39.251 ************************************ 00:12:39.251 05:09:58 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:39.251 05:09:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:39.251 05:09:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:39.251 05:09:58 -- common/autotest_common.sh@10 -- # set +x 00:12:39.251 ************************************ 00:12:39.251 START TEST nvme_multi_secondary 00:12:39.251 ************************************ 00:12:39.251 05:09:58 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:12:39.251 05:09:58 -- nvme/nvme.sh@52 -- # pid0=65596 00:12:39.251 05:09:58 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:39.251 05:09:58 -- nvme/nvme.sh@54 -- # pid1=65597 00:12:39.251 05:09:58 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:39.251 05:09:58 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:43.488 Initializing NVMe Controllers 00:12:43.488 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:43.488 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:43.488 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:43.488 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:43.488 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:12:43.488 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:12:43.488 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:12:43.488 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:12:43.488 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:12:43.488 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:12:43.488 Initialization complete. Launching workers. 
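nvme_multi_secondary drives the controllers from several spdk_nvme_perf processes at once: all share shared-memory id 0 (-i 0) and run on disjoint core masks, so one acts as the DPDK primary and the others attach as secondaries. A minimal sketch of that pattern with the flag values logged above, assuming a built tree and passwordless sudo:

  # Primary/secondary perf pair sharing shm id 0 on separate cores.
  perf=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}/build/bin/spdk_nvme_perf
  sudo "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary, core 0
  pid0=$!
  sleep 2   # assumed grace period so the primary initializes shared memory first
  sudo "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, core 1
  pid1=$!
  wait "$pid0" "$pid1"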
00:12:43.488 ======================================================== 00:12:43.488 Latency(us) 00:12:43.488 Device Information : IOPS MiB/s Average min max 00:12:43.488 PCIE (0000:00:06.0) NSID 1 from core 1: 5386.49 21.04 2970.87 1335.69 8033.75 00:12:43.489 PCIE (0000:00:07.0) NSID 1 from core 1: 5386.49 21.04 2972.44 1316.77 8535.22 00:12:43.489 PCIE (0000:00:09.0) NSID 1 from core 1: 5386.49 21.04 2972.11 1430.00 8894.65 00:12:43.489 PCIE (0000:00:08.0) NSID 1 from core 1: 5386.49 21.04 2972.06 1530.57 8392.47 00:12:43.489 PCIE (0000:00:08.0) NSID 2 from core 1: 5386.49 21.04 2986.34 1547.35 21867.72 00:12:43.489 PCIE (0000:00:08.0) NSID 3 from core 1: 5386.49 21.04 2988.27 1514.68 21329.93 00:12:43.489 ======================================================== 00:12:43.489 Total : 32318.97 126.25 2977.01 1316.77 21867.72 00:12:43.489 00:12:43.489 Initializing NVMe Controllers 00:12:43.489 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:43.489 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:43.489 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:43.489 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:43.489 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:12:43.489 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:12:43.489 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:12:43.489 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:12:43.489 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:12:43.489 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:12:43.489 Initialization complete. Launching workers. 00:12:43.489 ======================================================== 00:12:43.489 Latency(us) 00:12:43.489 Device Information : IOPS MiB/s Average min max 00:12:43.489 PCIE (0000:00:06.0) NSID 1 from core 2: 2297.30 8.97 6962.79 2006.44 18415.71 00:12:43.489 PCIE (0000:00:07.0) NSID 1 from core 2: 2297.30 8.97 6964.93 2080.77 22751.78 00:12:43.489 PCIE (0000:00:09.0) NSID 1 from core 2: 2297.30 8.97 6965.34 2073.20 23174.10 00:12:43.489 PCIE (0000:00:08.0) NSID 1 from core 2: 2297.30 8.97 6964.06 2140.87 23761.92 00:12:43.489 PCIE (0000:00:08.0) NSID 2 from core 2: 2297.30 8.97 6953.50 2022.27 26729.17 00:12:43.489 PCIE (0000:00:08.0) NSID 3 from core 2: 2297.30 8.97 6931.08 2042.53 17510.48 00:12:43.489 ======================================================== 00:12:43.489 Total : 13783.77 53.84 6956.95 2006.44 26729.17 00:12:43.489 00:12:43.489 05:10:02 -- nvme/nvme.sh@56 -- # wait 65596 00:12:44.865 Initializing NVMe Controllers 00:12:44.865 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:44.865 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:44.865 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:44.865 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:44.865 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:12:44.865 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:12:44.865 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:12:44.865 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:12:44.865 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:12:44.865 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:12:44.865 Initialization complete. Launching workers. 
00:12:44.865 ======================================================== 00:12:44.865 Latency(us) 00:12:44.865 Device Information : IOPS MiB/s Average min max 00:12:44.865 PCIE (0000:00:06.0) NSID 1 from core 0: 8319.04 32.50 1921.92 935.96 8754.08 00:12:44.865 PCIE (0000:00:07.0) NSID 1 from core 0: 8319.04 32.50 1922.90 960.96 8588.89 00:12:44.865 PCIE (0000:00:09.0) NSID 1 from core 0: 8319.04 32.50 1922.88 939.15 7762.71 00:12:44.865 PCIE (0000:00:08.0) NSID 1 from core 0: 8319.04 32.50 1922.87 916.21 7654.78 00:12:44.865 PCIE (0000:00:08.0) NSID 2 from core 0: 8319.04 32.50 1922.85 879.55 8112.67 00:12:44.865 PCIE (0000:00:08.0) NSID 3 from core 0: 8322.24 32.51 1922.09 847.90 7898.89 00:12:44.866 ======================================================== 00:12:44.866 Total : 49917.47 194.99 1922.58 847.90 8754.08 00:12:44.866 00:12:44.866 05:10:03 -- nvme/nvme.sh@57 -- # wait 65597 00:12:44.866 05:10:03 -- nvme/nvme.sh@61 -- # pid0=65668 00:12:44.866 05:10:03 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:44.866 05:10:03 -- nvme/nvme.sh@63 -- # pid1=65669 00:12:44.866 05:10:03 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:44.866 05:10:03 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:48.151 Initializing NVMe Controllers 00:12:48.151 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:48.151 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:48.151 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:48.151 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:48.151 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:12:48.151 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:12:48.151 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:12:48.151 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:12:48.151 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:12:48.151 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:12:48.151 Initialization complete. Launching workers. 
00:12:48.151 ======================================================== 00:12:48.151 Latency(us) 00:12:48.151 Device Information : IOPS MiB/s Average min max 00:12:48.151 PCIE (0000:00:06.0) NSID 1 from core 0: 6116.48 23.89 2614.39 1030.46 5365.88 00:12:48.151 PCIE (0000:00:07.0) NSID 1 from core 0: 6116.48 23.89 2615.56 1067.07 6369.32 00:12:48.151 PCIE (0000:00:09.0) NSID 1 from core 0: 6116.48 23.89 2615.67 1046.37 5823.97 00:12:48.151 PCIE (0000:00:08.0) NSID 1 from core 0: 6116.48 23.89 2615.67 1062.80 5761.36 00:12:48.151 PCIE (0000:00:08.0) NSID 2 from core 0: 6116.48 23.89 2615.66 1059.57 5644.34 00:12:48.151 PCIE (0000:00:08.0) NSID 3 from core 0: 6121.81 23.91 2613.50 1060.94 5506.74 00:12:48.151 ======================================================== 00:12:48.151 Total : 36704.20 143.38 2615.07 1030.46 6369.32 00:12:48.151 00:12:48.409 Initializing NVMe Controllers 00:12:48.409 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:48.409 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:48.409 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:48.409 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:48.409 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:12:48.409 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:12:48.409 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:12:48.409 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:12:48.409 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:12:48.409 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:12:48.409 Initialization complete. Launching workers. 00:12:48.409 ======================================================== 00:12:48.409 Latency(us) 00:12:48.409 Device Information : IOPS MiB/s Average min max 00:12:48.409 PCIE (0000:00:06.0) NSID 1 from core 1: 5481.40 21.41 2917.30 1032.73 5808.99 00:12:48.409 PCIE (0000:00:07.0) NSID 1 from core 1: 5481.40 21.41 2918.26 1051.80 5888.91 00:12:48.409 PCIE (0000:00:09.0) NSID 1 from core 1: 5481.40 21.41 2918.10 1059.01 5012.06 00:12:48.409 PCIE (0000:00:08.0) NSID 1 from core 1: 5481.40 21.41 2917.95 1069.38 5173.02 00:12:48.409 PCIE (0000:00:08.0) NSID 2 from core 1: 5481.40 21.41 2917.77 1065.56 5340.54 00:12:48.409 PCIE (0000:00:08.0) NSID 3 from core 1: 5481.40 21.41 2917.62 1064.08 5590.97 00:12:48.409 ======================================================== 00:12:48.409 Total : 32888.38 128.47 2917.83 1032.73 5888.91 00:12:48.409 00:12:50.938 Initializing NVMe Controllers 00:12:50.938 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:50.938 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:12:50.938 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:12:50.938 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:12:50.938 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:12:50.938 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:12:50.938 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:12:50.938 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:12:50.938 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:12:50.938 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:12:50.938 Initialization complete. Launching workers. 
00:12:50.938 ======================================================== 00:12:50.938 Latency(us) 00:12:50.938 Device Information : IOPS MiB/s Average min max 00:12:50.938 PCIE (0000:00:06.0) NSID 1 from core 2: 3462.93 13.53 4618.87 1102.62 14400.33 00:12:50.938 PCIE (0000:00:07.0) NSID 1 from core 2: 3462.93 13.53 4619.89 1118.11 16256.05 00:12:50.938 PCIE (0000:00:09.0) NSID 1 from core 2: 3462.93 13.53 4619.51 1095.47 12848.81 00:12:50.938 PCIE (0000:00:08.0) NSID 1 from core 2: 3462.93 13.53 4619.37 1092.71 12773.47 00:12:50.938 PCIE (0000:00:08.0) NSID 2 from core 2: 3462.93 13.53 4619.49 880.01 13460.80 00:12:50.938 PCIE (0000:00:08.0) NSID 3 from core 2: 3462.93 13.53 4619.16 879.59 13166.33 00:12:50.938 ======================================================== 00:12:50.938 Total : 20777.61 81.16 4619.38 879.59 16256.05 00:12:50.938 00:12:50.938 ************************************ 00:12:50.938 END TEST nvme_multi_secondary 00:12:50.938 ************************************ 00:12:50.938 05:10:09 -- nvme/nvme.sh@65 -- # wait 65668 00:12:50.938 05:10:09 -- nvme/nvme.sh@66 -- # wait 65669 00:12:50.938 00:12:50.938 real 0m11.137s 00:12:50.938 user 0m19.402s 00:12:50.938 sys 0m1.063s 00:12:50.938 05:10:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.938 05:10:09 -- common/autotest_common.sh@10 -- # set +x 00:12:50.938 05:10:09 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:50.938 05:10:09 -- nvme/nvme.sh@102 -- # kill_stub 00:12:50.938 05:10:09 -- common/autotest_common.sh@1065 -- # [[ -e /proc/64594 ]] 00:12:50.938 05:10:09 -- common/autotest_common.sh@1066 -- # kill 64594 00:12:50.938 05:10:09 -- common/autotest_common.sh@1067 -- # wait 64594 00:12:51.196 [2024-07-26 05:10:10.144294] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:51.196 [2024-07-26 05:10:10.144393] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:51.196 [2024-07-26 05:10:10.144418] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:51.196 [2024-07-26 05:10:10.144463] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:51.763 [2024-07-26 05:10:10.668431] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:51.763 [2024-07-26 05:10:10.668528] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:51.763 [2024-07-26 05:10:10.668554] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:51.763 [2024-07-26 05:10:10.668580] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:52.697 [2024-07-26 05:10:11.688560] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 
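The owning process (pid 65031) is not found messages around this point are cleanup noise: admin requests queued by a test process that has already exited are dropped as each controller is reset during teardown. The reaping itself is the plain shell pattern kill_stub runs above; a minimal sketch, with the pid purely illustrative:

  # Reap a leftover stub process if it is still alive, then remove its marker.
  pid=64594                       # hypothetical; the harness substitutes its own
  if [[ -e /proc/$pid ]]; then
    kill "$pid"
    wait "$pid" 2>/dev/null       # wait only applies if the stub is our child
  fi
  rm -f /var/run/spdk_stub0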
00:12:52.697 [2024-07-26 05:10:11.688652] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:52.697 [2024-07-26 05:10:11.688677] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:52.697 [2024-07-26 05:10:11.688704] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:54.601 [2024-07-26 05:10:13.207104] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:54.601 [2024-07-26 05:10:13.207202] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:54.601 [2024-07-26 05:10:13.207247] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:54.601 [2024-07-26 05:10:13.207281] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65539) is not found. Dropping the request. 00:12:54.601 05:10:13 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:12:54.601 05:10:13 -- common/autotest_common.sh@1073 -- # echo 2 00:12:54.601 05:10:13 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:54.601 05:10:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:54.601 05:10:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:54.601 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:12:54.601 ************************************ 00:12:54.601 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:54.601 ************************************ 00:12:54.601 05:10:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:54.601 * Looking for test storage... 
00:12:54.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:54.601 05:10:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:54.601 05:10:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:54.601 05:10:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:54.601 05:10:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:54.601 05:10:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:54.601 05:10:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:54.601 05:10:13 -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:54.601 05:10:13 -- common/autotest_common.sh@1509 -- # local bdfs 00:12:54.601 05:10:13 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:54.601 05:10:13 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:54.601 05:10:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:54.601 05:10:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:12:54.601 05:10:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:54.601 05:10:13 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:54.601 05:10:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:54.601 05:10:13 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:54.601 05:10:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:12:54.601 05:10:13 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:12:54.601 05:10:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:12:54.601 05:10:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:12:54.601 05:10:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65856 00:12:54.601 05:10:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:54.601 05:10:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:54.601 05:10:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65856 00:12:54.601 05:10:13 -- common/autotest_common.sh@819 -- # '[' -z 65856 ']' 00:12:54.601 05:10:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.601 05:10:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:54.601 05:10:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.601 05:10:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:54.601 05:10:13 -- common/autotest_common.sh@10 -- # set +x 00:12:54.859 [2024-07-26 05:10:13.789019] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
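get_first_nvme_bdf above boils down to one pipeline: gen_nvme.sh emits a JSON bdev config and jq extracts each controller's PCI address from it. The same one-liner works standalone; a small sketch, assuming the repo layout used throughout this job:

  # Enumerate NVMe PCI addresses the way the harness does, then take the first.
  rootdir=${rootdir:-/home/vagrant/spdk_repo/spdk}
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"
  first_bdf=${bdfs[0]}            # 0000:00:06.0 in this run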
00:12:54.859 [2024-07-26 05:10:13.789398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65856 ] 00:12:55.117 [2024-07-26 05:10:13.989640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.375 [2024-07-26 05:10:14.237983] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:55.375 [2024-07-26 05:10:14.238546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.375 [2024-07-26 05:10:14.238634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.375 [2024-07-26 05:10:14.238679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.375 [2024-07-26 05:10:14.238702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.750 05:10:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:56.750 05:10:15 -- common/autotest_common.sh@852 -- # return 0 00:12:56.750 05:10:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:12:56.750 05:10:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.750 05:10:15 -- common/autotest_common.sh@10 -- # set +x 00:12:56.750 nvme0n1 00:12:56.750 05:10:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.750 05:10:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:56.750 05:10:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ejrpb.txt 00:12:56.750 05:10:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:56.750 05:10:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:56.750 05:10:15 -- common/autotest_common.sh@10 -- # set +x 00:12:56.750 true 00:12:56.750 05:10:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:56.750 05:10:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:56.750 05:10:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721970615 00:12:56.750 05:10:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65892 00:12:56.750 05:10:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:56.750 05:10:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:56.750 05:10:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:58.653 05:10:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.653 05:10:17 -- common/autotest_common.sh@10 -- # set +x 00:12:58.653 [2024-07-26 05:10:17.571947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:12:58.653 [2024-07-26 05:10:17.572302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:58.653 [2024-07-26 05:10:17.572331] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:58.653 [2024-07-26 05:10:17.572349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.653 [2024-07-26 05:10:17.574382] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:58.653 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65892 00:12:58.653 05:10:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65892 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65892 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:58.653 05:10:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.653 05:10:17 -- common/autotest_common.sh@10 -- # set +x 00:12:58.653 05:10:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ejrpb.txt 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:58.653 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:58.654 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:58.654 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:58.654 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:58.654 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:58.654 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:58.654 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:58.654 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:58.654 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:58.654 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ejrpb.txt 00:12:58.654 05:10:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65856 00:12:58.654 05:10:17 -- common/autotest_common.sh@926 -- # '[' -z 65856 ']' 00:12:58.654 05:10:17 -- common/autotest_common.sh@930 -- # kill -0 65856 00:12:58.654 05:10:17 -- common/autotest_common.sh@931 -- # uname 00:12:58.654 05:10:17 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:58.654 05:10:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65856 00:12:58.654 killing process with pid 65856 00:12:58.654 05:10:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:58.654 05:10:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:58.654 05:10:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65856' 00:12:58.654 05:10:17 -- common/autotest_common.sh@945 -- # kill 65856 00:12:58.654 05:10:17 -- common/autotest_common.sh@950 -- # wait 65856 00:13:01.187 05:10:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:13:01.187 05:10:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:13:01.187 00:13:01.187 real 0m6.681s 00:13:01.187 user 0m23.382s 00:13:01.187 sys 0m0.757s 00:13:01.187 ************************************ 00:13:01.187 END TEST bdev_nvme_reset_stuck_adm_cmd 00:13:01.187 ************************************ 00:13:01.187 05:10:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.187 05:10:20 -- common/autotest_common.sh@10 -- # set +x 00:13:01.187 05:10:20 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:13:01.187 05:10:20 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:13:01.187 05:10:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:01.187 05:10:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:01.187 05:10:20 -- common/autotest_common.sh@10 -- # set +x 00:13:01.187 ************************************ 00:13:01.187 START TEST nvme_fio 00:13:01.187 ************************************ 00:13:01.187 05:10:20 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:13:01.187 05:10:20 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:01.187 05:10:20 -- nvme/nvme.sh@32 -- # ran_fio=false 00:13:01.187 05:10:20 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:13:01.187 05:10:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:01.187 05:10:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:13:01.187 05:10:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:01.187 05:10:20 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:01.187 05:10:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:01.446 05:10:20 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:01.447 05:10:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:13:01.447 05:10:20 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:13:01.447 05:10:20 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:13:01.447 05:10:20 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:01.447 05:10:20 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:01.447 05:10:20 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:01.706 05:10:20 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:01.706 05:10:20 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:01.965 05:10:20 -- nvme/nvme.sh@41 -- # bs=4096 00:13:01.965 05:10:20 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:13:01.965 05:10:20 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:13:01.965 05:10:20 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:01.965 05:10:20 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:01.965 05:10:20 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:01.965 05:10:20 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:01.965 05:10:20 -- common/autotest_common.sh@1320 -- # shift 00:13:01.965 05:10:20 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:01.965 05:10:20 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:01.965 05:10:20 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:01.965 05:10:20 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:01.965 05:10:20 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:01.965 05:10:20 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:01.965 05:10:20 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:01.965 05:10:20 -- common/autotest_common.sh@1326 -- # break 00:13:01.965 05:10:20 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:01.965 05:10:20 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:13:01.965 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:01.965 fio-3.35 00:13:01.965 Starting 1 thread 00:13:05.260 00:13:05.260 test: (groupid=0, jobs=1): err= 0: pid=66045: Fri Jul 26 05:10:24 2024 00:13:05.260 read: IOPS=18.8k, BW=73.3MiB/s (76.9MB/s)(147MiB/2001msec) 00:13:05.260 slat (usec): min=3, max=344, avg= 5.54, stdev= 2.47 00:13:05.260 clat (usec): min=213, max=9306, avg=3392.37, stdev=547.13 00:13:05.260 lat (usec): min=218, max=9310, avg=3397.91, stdev=547.80 00:13:05.260 clat percentiles (usec): 00:13:05.260 | 1.00th=[ 2540], 5.00th=[ 2966], 10.00th=[ 3032], 20.00th=[ 3097], 00:13:05.260 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3261], 00:13:05.261 | 70.00th=[ 3359], 80.00th=[ 3851], 90.00th=[ 4015], 95.00th=[ 4113], 00:13:05.261 | 99.00th=[ 5473], 99.50th=[ 6849], 99.90th=[ 8029], 99.95th=[ 8717], 00:13:05.261 | 99.99th=[ 8979] 00:13:05.261 bw ( KiB/s): min=64200, max=80328, per=98.20%, avg=73698.67, stdev=8438.18, samples=3 00:13:05.261 iops : min=16050, max=20082, avg=18424.67, stdev=2109.55, samples=3 00:13:05.261 write: IOPS=18.8k, BW=73.3MiB/s (76.9MB/s)(147MiB/2001msec); 0 zone resets 00:13:05.261 slat (usec): min=3, max=506, avg= 5.65, stdev= 3.88 00:13:05.261 clat (usec): min=248, max=8939, avg=3400.75, stdev=558.57 00:13:05.261 lat (usec): min=252, max=9238, avg=3406.39, stdev=559.49 00:13:05.261 clat percentiles (usec): 00:13:05.261 | 1.00th=[ 2540], 5.00th=[ 2999], 10.00th=[ 3032], 20.00th=[ 3097], 00:13:05.261 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3228], 60.00th=[ 3261], 00:13:05.261 | 70.00th=[ 3392], 80.00th=[ 3851], 90.00th=[ 4015], 95.00th=[ 4146], 00:13:05.261 | 99.00th=[ 5604], 99.50th=[ 7111], 99.90th=[ 
8094], 99.95th=[ 8586], 00:13:05.261 | 99.99th=[ 8717] 00:13:05.261 bw ( KiB/s): min=64528, max=79952, per=98.06%, avg=73624.00, stdev=8075.97, samples=3 00:13:05.261 iops : min=16132, max=19988, avg=18406.00, stdev=2018.99, samples=3 00:13:05.261 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:13:05.261 lat (msec) : 2=0.27%, 4=89.20%, 10=10.48% 00:13:05.261 cpu : usr=98.70%, sys=0.25%, ctx=22, majf=0, minf=608 00:13:05.261 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:05.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:05.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:05.261 issued rwts: total=37545,37560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:05.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:05.261 00:13:05.261 Run status group 0 (all jobs): 00:13:05.261 READ: bw=73.3MiB/s (76.9MB/s), 73.3MiB/s-73.3MiB/s (76.9MB/s-76.9MB/s), io=147MiB (154MB), run=2001-2001msec 00:13:05.261 WRITE: bw=73.3MiB/s (76.9MB/s), 73.3MiB/s-73.3MiB/s (76.9MB/s-76.9MB/s), io=147MiB (154MB), run=2001-2001msec 00:13:05.548 ----------------------------------------------------- 00:13:05.548 Suppressions used: 00:13:05.548 count bytes template 00:13:05.548 1 32 /usr/src/fio/parse.c 00:13:05.548 1 8 libtcmalloc_minimal.so 00:13:05.548 ----------------------------------------------------- 00:13:05.548 00:13:05.548 05:10:24 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:05.548 05:10:24 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:05.548 05:10:24 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:05.548 05:10:24 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:13:05.806 05:10:24 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:13:05.806 05:10:24 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:06.065 05:10:25 -- nvme/nvme.sh@41 -- # bs=4096 00:13:06.065 05:10:25 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:13:06.065 05:10:25 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:13:06.065 05:10:25 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:06.065 05:10:25 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:06.065 05:10:25 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:06.065 05:10:25 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:06.065 05:10:25 -- common/autotest_common.sh@1320 -- # shift 00:13:06.065 05:10:25 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:06.065 05:10:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:06.065 05:10:25 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:06.065 05:10:25 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:06.065 05:10:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:06.065 05:10:25 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:06.065 05:10:25 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:06.065 
05:10:25 -- common/autotest_common.sh@1326 -- # break 00:13:06.065 05:10:25 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:06.065 05:10:25 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:13:06.324 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:06.324 fio-3.35 00:13:06.324 Starting 1 thread 00:13:09.609 00:13:09.610 test: (groupid=0, jobs=1): err= 0: pid=66110: Fri Jul 26 05:10:28 2024 00:13:09.610 read: IOPS=19.9k, BW=77.7MiB/s (81.5MB/s)(156MiB/2001msec) 00:13:09.610 slat (nsec): min=4343, max=73744, avg=5293.71, stdev=1395.45 00:13:09.610 clat (usec): min=217, max=8710, avg=3200.74, stdev=505.34 00:13:09.610 lat (usec): min=221, max=8776, avg=3206.03, stdev=506.17 00:13:09.610 clat percentiles (usec): 00:13:09.610 | 1.00th=[ 2802], 5.00th=[ 2900], 10.00th=[ 2933], 20.00th=[ 2966], 00:13:09.610 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:13:09.610 | 70.00th=[ 3130], 80.00th=[ 3195], 90.00th=[ 3916], 95.00th=[ 4080], 00:13:09.610 | 99.00th=[ 5145], 99.50th=[ 5145], 99.90th=[ 8029], 99.95th=[ 8586], 00:13:09.610 | 99.99th=[ 8717] 00:13:09.610 bw ( KiB/s): min=72808, max=84616, per=100.00%, avg=80602.67, stdev=6751.38, samples=3 00:13:09.610 iops : min=18202, max=21154, avg=20150.67, stdev=1687.84, samples=3 00:13:09.610 write: IOPS=19.9k, BW=77.6MiB/s (81.3MB/s)(155MiB/2001msec); 0 zone resets 00:13:09.610 slat (usec): min=4, max=586, avg= 5.44, stdev= 3.20 00:13:09.610 clat (usec): min=175, max=8807, avg=3205.81, stdev=511.21 00:13:09.610 lat (usec): min=181, max=8812, avg=3211.26, stdev=512.01 00:13:09.610 clat percentiles (usec): 00:13:09.610 | 1.00th=[ 2802], 5.00th=[ 2900], 10.00th=[ 2933], 20.00th=[ 2966], 00:13:09.610 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:13:09.610 | 70.00th=[ 3163], 80.00th=[ 3195], 90.00th=[ 3916], 95.00th=[ 4080], 00:13:09.610 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 8160], 99.95th=[ 8586], 00:13:09.610 | 99.99th=[ 8717] 00:13:09.610 bw ( KiB/s): min=72888, max=84800, per=100.00%, avg=80674.67, stdev=6747.44, samples=3 00:13:09.610 iops : min=18222, max=21200, avg=20168.67, stdev=1686.86, samples=3 00:13:09.610 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:13:09.610 lat (msec) : 2=0.13%, 4=92.98%, 10=6.85% 00:13:09.610 cpu : usr=99.35%, sys=0.05%, ctx=5, majf=0, minf=607 00:13:09.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:09.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:09.610 issued rwts: total=39822,39741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:09.610 00:13:09.610 Run status group 0 (all jobs): 00:13:09.610 READ: bw=77.7MiB/s (81.5MB/s), 77.7MiB/s-77.7MiB/s (81.5MB/s-81.5MB/s), io=156MiB (163MB), run=2001-2001msec 00:13:09.610 WRITE: bw=77.6MiB/s (81.3MB/s), 77.6MiB/s-77.6MiB/s (81.3MB/s-81.3MB/s), io=155MiB (163MB), run=2001-2001msec 00:13:09.868 ----------------------------------------------------- 00:13:09.868 Suppressions used: 00:13:09.868 count bytes template 00:13:09.868 1 32 /usr/src/fio/parse.c 00:13:09.868 1 8 libtcmalloc_minimal.so 00:13:09.868 
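(Annotation: the fio_plugin trace above boils down to the pattern below. When the SPDK fio ioengine was built with a sanitizer, the sanitizer runtime has to appear in LD_PRELOAD ahead of the plugin itself, otherwise fio cannot load it. A minimal sketch of that autotest_common.sh helper; the paths match this run.)

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
sanitizers=('libasan' 'libclang_rt.asan')
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # Resolve the sanitizer runtime the plugin is linked against, if any.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break        # here: /usr/lib64/libasan.so.8
done
# Sanitizer runtime first, then the ioengine, exactly as logged above.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"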
----------------------------------------------------- 00:13:09.868 00:13:09.868 05:10:28 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:09.868 05:10:28 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:09.868 05:10:28 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:13:09.868 05:10:28 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:10.126 05:10:29 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:13:10.126 05:10:29 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:10.384 05:10:29 -- nvme/nvme.sh@41 -- # bs=4096 00:13:10.384 05:10:29 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:13:10.384 05:10:29 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:13:10.384 05:10:29 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:10.384 05:10:29 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:10.384 05:10:29 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:10.384 05:10:29 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:10.384 05:10:29 -- common/autotest_common.sh@1320 -- # shift 00:13:10.384 05:10:29 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:10.384 05:10:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:10.384 05:10:29 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:10.384 05:10:29 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:10.384 05:10:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:10.384 05:10:29 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:10.384 05:10:29 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:10.384 05:10:29 -- common/autotest_common.sh@1326 -- # break 00:13:10.384 05:10:29 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:10.384 05:10:29 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:13:10.668 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:10.668 fio-3.35 00:13:10.668 Starting 1 thread 00:13:13.952 00:13:13.952 test: (groupid=0, jobs=1): err= 0: pid=66176: Fri Jul 26 05:10:32 2024 00:13:13.952 read: IOPS=19.4k, BW=75.7MiB/s (79.3MB/s)(151MiB/2001msec) 00:13:13.952 slat (nsec): min=3898, max=61509, avg=5345.78, stdev=1410.53 00:13:13.952 clat (usec): min=660, max=9119, avg=3286.77, stdev=462.53 00:13:13.952 lat (usec): min=665, max=9180, avg=3292.12, stdev=463.21 00:13:13.952 clat percentiles (usec): 00:13:13.952 | 1.00th=[ 2573], 5.00th=[ 2900], 10.00th=[ 2966], 20.00th=[ 3032], 00:13:13.952 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3163], 60.00th=[ 3195], 00:13:13.952 | 70.00th=[ 3294], 80.00th=[ 3556], 90.00th=[ 3884], 95.00th=[ 4080], 00:13:13.952 | 99.00th=[ 4293], 99.50th=[ 5538], 99.90th=[ 7439], 99.95th=[ 7635], 00:13:13.952 | 99.99th=[ 8979] 00:13:13.952 
bw ( KiB/s): min=64736, max=82096, per=98.30%, avg=76157.33, stdev=9893.79, samples=3 00:13:13.952 iops : min=16184, max=20524, avg=19039.33, stdev=2473.45, samples=3 00:13:13.952 write: IOPS=19.3k, BW=75.5MiB/s (79.2MB/s)(151MiB/2001msec); 0 zone resets 00:13:13.952 slat (usec): min=4, max=159, avg= 5.51, stdev= 1.60 00:13:13.952 clat (usec): min=697, max=8970, avg=3298.17, stdev=476.00 00:13:13.952 lat (usec): min=701, max=8984, avg=3303.68, stdev=476.62 00:13:13.952 clat percentiles (usec): 00:13:13.952 | 1.00th=[ 2409], 5.00th=[ 2900], 10.00th=[ 2966], 20.00th=[ 3032], 00:13:13.952 | 30.00th=[ 3064], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3228], 00:13:13.952 | 70.00th=[ 3326], 80.00th=[ 3589], 90.00th=[ 3916], 95.00th=[ 4080], 00:13:13.952 | 99.00th=[ 4424], 99.50th=[ 5669], 99.90th=[ 7635], 99.95th=[ 7963], 00:13:13.952 | 99.99th=[ 8717] 00:13:13.952 bw ( KiB/s): min=65120, max=82000, per=98.68%, avg=76314.67, stdev=9695.27, samples=3 00:13:13.952 iops : min=16280, max=20500, avg=19078.67, stdev=2423.82, samples=3 00:13:13.952 lat (usec) : 750=0.01%, 1000=0.05% 00:13:13.952 lat (msec) : 2=0.49%, 4=92.11%, 10=7.34% 00:13:13.952 cpu : usr=99.35%, sys=0.05%, ctx=9, majf=0, minf=607 00:13:13.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:13.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:13.952 issued rwts: total=38756,38686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:13.952 00:13:13.952 Run status group 0 (all jobs): 00:13:13.952 READ: bw=75.7MiB/s (79.3MB/s), 75.7MiB/s-75.7MiB/s (79.3MB/s-79.3MB/s), io=151MiB (159MB), run=2001-2001msec 00:13:13.952 WRITE: bw=75.5MiB/s (79.2MB/s), 75.5MiB/s-75.5MiB/s (79.2MB/s-79.2MB/s), io=151MiB (158MB), run=2001-2001msec 00:13:13.952 ----------------------------------------------------- 00:13:13.952 Suppressions used: 00:13:13.952 count bytes template 00:13:13.952 1 32 /usr/src/fio/parse.c 00:13:13.952 1 8 libtcmalloc_minimal.so 00:13:13.952 ----------------------------------------------------- 00:13:13.952 00:13:13.952 05:10:32 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:13.952 05:10:32 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:13.952 05:10:32 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:13:13.952 05:10:32 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:14.212 05:10:33 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:13:14.212 05:10:33 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:14.471 05:10:33 -- nvme/nvme.sh@41 -- # bs=4096 00:13:14.471 05:10:33 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:13:14.471 05:10:33 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:13:14.471 05:10:33 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:14.471 05:10:33 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:14.471 05:10:33 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:14.471 05:10:33 -- 
common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:14.471 05:10:33 -- common/autotest_common.sh@1320 -- # shift 00:13:14.471 05:10:33 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:14.471 05:10:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:14.471 05:10:33 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:14.471 05:10:33 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:14.471 05:10:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:14.471 05:10:33 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:14.471 05:10:33 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:14.471 05:10:33 -- common/autotest_common.sh@1326 -- # break 00:13:14.471 05:10:33 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:14.471 05:10:33 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:13:14.471 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:14.471 fio-3.35 00:13:14.471 Starting 1 thread 00:13:19.734 00:13:19.734 test: (groupid=0, jobs=1): err= 0: pid=66238: Fri Jul 26 05:10:38 2024 00:13:19.734 read: IOPS=19.5k, BW=76.1MiB/s (79.8MB/s)(152MiB/2001msec) 00:13:19.734 slat (usec): min=4, max=236, avg= 5.23, stdev= 2.48 00:13:19.734 clat (usec): min=253, max=7684, avg=3265.42, stdev=277.49 00:13:19.734 lat (usec): min=258, max=7749, avg=3270.65, stdev=277.92 00:13:19.734 clat percentiles (usec): 00:13:19.734 | 1.00th=[ 2933], 5.00th=[ 3032], 10.00th=[ 3064], 20.00th=[ 3097], 00:13:19.734 | 30.00th=[ 3163], 40.00th=[ 3195], 50.00th=[ 3228], 60.00th=[ 3228], 00:13:19.734 | 70.00th=[ 3294], 80.00th=[ 3326], 90.00th=[ 3458], 95.00th=[ 3949], 00:13:19.734 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 5080], 99.95th=[ 6128], 00:13:19.734 | 99.99th=[ 7570] 00:13:19.734 bw ( KiB/s): min=72432, max=79936, per=99.17%, avg=77330.67, stdev=4245.24, samples=3 00:13:19.734 iops : min=18108, max=19984, avg=19332.67, stdev=1061.31, samples=3 00:13:19.734 write: IOPS=19.5k, BW=76.0MiB/s (79.7MB/s)(152MiB/2001msec); 0 zone resets 00:13:19.734 slat (usec): min=4, max=452, avg= 5.40, stdev= 3.66 00:13:19.734 clat (usec): min=212, max=7611, avg=3275.62, stdev=283.54 00:13:19.734 lat (usec): min=216, max=7623, avg=3281.01, stdev=283.99 00:13:19.734 clat percentiles (usec): 00:13:19.734 | 1.00th=[ 2933], 5.00th=[ 3032], 10.00th=[ 3064], 20.00th=[ 3130], 00:13:19.734 | 30.00th=[ 3163], 40.00th=[ 3195], 50.00th=[ 3228], 60.00th=[ 3261], 00:13:19.734 | 70.00th=[ 3294], 80.00th=[ 3326], 90.00th=[ 3490], 95.00th=[ 3982], 00:13:19.734 | 99.00th=[ 4228], 99.50th=[ 4359], 99.90th=[ 5407], 99.95th=[ 6783], 00:13:19.734 | 99.99th=[ 7504] 00:13:19.734 bw ( KiB/s): min=72504, max=80192, per=99.48%, avg=77448.00, stdev=4290.26, samples=3 00:13:19.734 iops : min=18126, max=20048, avg=19362.00, stdev=1072.57, samples=3 00:13:19.734 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:19.734 lat (msec) : 2=0.06%, 4=95.53%, 10=4.38% 00:13:19.734 cpu : usr=98.75%, sys=0.30%, ctx=4, majf=0, minf=606 00:13:19.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:19.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:13:19.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:19.734 issued rwts: total=39008,38944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:19.734 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:19.734 00:13:19.734 Run status group 0 (all jobs): 00:13:19.734 READ: bw=76.1MiB/s (79.8MB/s), 76.1MiB/s-76.1MiB/s (79.8MB/s-79.8MB/s), io=152MiB (160MB), run=2001-2001msec 00:13:19.734 WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=152MiB (160MB), run=2001-2001msec 00:13:19.993 ----------------------------------------------------- 00:13:19.993 Suppressions used: 00:13:19.993 count bytes template 00:13:19.993 1 32 /usr/src/fio/parse.c 00:13:19.993 1 8 libtcmalloc_minimal.so 00:13:19.993 ----------------------------------------------------- 00:13:19.993 00:13:19.993 05:10:39 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:19.993 05:10:39 -- nvme/nvme.sh@46 -- # true 00:13:19.993 00:13:19.993 real 0m18.771s 00:13:19.993 user 0m14.086s 00:13:19.993 sys 0m5.017s 00:13:19.993 05:10:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.993 05:10:39 -- common/autotest_common.sh@10 -- # set +x 00:13:19.993 ************************************ 00:13:19.993 END TEST nvme_fio 00:13:19.993 ************************************ 00:13:19.993 ************************************ 00:13:19.993 END TEST nvme 00:13:19.993 ************************************ 00:13:19.993 00:13:19.993 real 1m38.482s 00:13:19.993 user 3m48.137s 00:13:19.993 sys 0m22.229s 00:13:19.993 05:10:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.993 05:10:39 -- common/autotest_common.sh@10 -- # set +x 00:13:19.993 05:10:39 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:13:19.993 05:10:39 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:19.993 05:10:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:19.993 05:10:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:19.993 05:10:39 -- common/autotest_common.sh@10 -- # set +x 00:13:20.251 ************************************ 00:13:20.251 START TEST nvme_scc 00:13:20.251 ************************************ 00:13:20.251 05:10:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:20.251 * Looking for test storage... 
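(Annotation: a quick consistency check on the run status just above, not part of the test itself: at a fixed 4096-byte block size, bandwidth must equal IOPS times block size. Using the issued read count and runtime fio reports for this third controller:)

# 39008 reads issued over 2001 ms, 4096B each:
iops=$(( 39008 * 1000 / 2001 ))    # ~19494; fio rounds this to "19.5k"
bps=$(( iops * 4096 ))             # 79,847,424 B/s
echo "$(( bps / 1000000 )) MB/s"   # ~79 MB/s, matching the logged 79.8MB/s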
00:13:20.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:20.251 05:10:39 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:20.251 05:10:39 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:20.251 05:10:39 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:20.251 05:10:39 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:20.251 05:10:39 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.251 05:10:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.251 05:10:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.251 05:10:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.251 05:10:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.251 05:10:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.251 05:10:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.252 05:10:39 -- paths/export.sh@5 -- # export PATH 00:13:20.252 05:10:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.252 05:10:39 -- nvme/functions.sh@10 -- # ctrls=() 00:13:20.252 05:10:39 -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:20.252 05:10:39 -- nvme/functions.sh@11 -- # nvmes=() 00:13:20.252 05:10:39 -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:20.252 05:10:39 -- nvme/functions.sh@12 -- # bdfs=() 00:13:20.252 05:10:39 -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:20.252 05:10:39 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:20.252 05:10:39 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:20.252 05:10:39 -- nvme/functions.sh@14 -- # nvme_name= 00:13:20.252 05:10:39 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:20.252 05:10:39 -- nvme/nvme_scc.sh@12 -- # uname 00:13:20.252 05:10:39 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 
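(Annotation: functions.sh has just declared its controller registries above; scan_nvme_ctrls, which runs next in this log, fills them keyed by device name. Their shapes, with the values this run ends up storing, plus a hypothetical consumer loop that is not from the log:)

declare -A ctrls bdfs      # ctrls[nvme0]=nvme0; bdfs[nvme0]=0000:00:09.0
declare -A nvmes           # nvmes[nvme0]=nvme0_ns (name of a per-namespace array)
declare -a ordered_ctrls   # ordered_ctrls[0]=nvme0; index = nvme instance number

# Hypothetical consumer: walk the scanned controllers in instance order.
for ctrl in "${ordered_ctrls[@]}"; do
    [[ -n $ctrl ]] && echo "$ctrl -> PCI ${bdfs[$ctrl]}"
done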
00:13:20.252 05:10:39 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:20.252 05:10:39 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:20.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:20.819 Waiting for block devices as requested 00:13:21.077 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:13:21.077 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:13:21.335 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:13:21.335 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:13:26.608 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:13:26.608 05:10:45 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:26.608 05:10:45 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:26.608 05:10:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:26.608 05:10:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:13:26.608 05:10:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:13:26.608 05:10:45 -- scripts/common.sh@15 -- # local i 00:13:26.608 05:10:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:13:26.608 05:10:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:26.608 05:10:45 -- scripts/common.sh@24 -- # return 0 00:13:26.608 05:10:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:26.608 05:10:45 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:26.608 05:10:45 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@18 -- # shift 00:13:26.608 05:10:45 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 
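(Annotation: the long run of eval lines above and below is a single loop in functions.sh's nvme_get. nvme-cli's id-ctrl prints one "field : value" pair per line; each pair is stored into a named global associative array so later tests can read e.g. ${nvme0[mdts]}. A condensed sketch; the key/value trimming is a simplification of what functions.sh actually does:)

nvme_get() {
    local ref=$1 reg val               # ref = target array name, e.g. nvme0
    local -gA "$ref=()"                # declare the global assoc array, as traced
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}       # simplification: strip key whitespace
        [[ -n $reg && -n $val ]] && eval "${ref}[$reg]=\"${val# }\""
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ref")
}

nvme_get nvme0                         # afterwards ${nvme0[sn]} holds "12343..."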
00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.608 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:26.608 05:10:45 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:26.608 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 
-- # [[ -n 3 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 
05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.609 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:26.609 05:10:45 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:26.609 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # 
nvme0[pels]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # 
eval 'nvme0[awun]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.610 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:26.610 05:10:45 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.610 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:26.611 05:10:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:26.611 05:10:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:26.611 05:10:45 -- nvme/functions.sh@62 -- # 
bdfs["$ctrl_dev"]=0000:00:09.0 00:13:26.611 05:10:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:26.611 05:10:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:26.611 05:10:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:13:26.611 05:10:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:13:26.611 05:10:45 -- scripts/common.sh@15 -- # local i 00:13:26.611 05:10:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:13:26.611 05:10:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:26.611 05:10:45 -- scripts/common.sh@24 -- # return 0 00:13:26.611 05:10:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:26.611 05:10:45 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:26.611 05:10:45 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@18 -- # shift 00:13:26.611 05:10:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 
525400 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 
-- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:26.611 05:10:45 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.611 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.611 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 
'nvme1[frmw]="0x3"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 
05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.612 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:26.612 05:10:45 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.612 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- 
# IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- 
# nvme1[icsvscc]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.613 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.613 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:26.613 05:10:45 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:26.614 05:10:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:26.614 05:10:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:26.614 05:10:45 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:26.614 05:10:45 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@18 -- # shift 00:13:26.614 05:10:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 
05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:26.614 05:10:45 -- 
nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.614 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:26.614 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.614 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.615 
05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:26.615 
05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:26.615 05:10:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:26.615 05:10:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:13:26.615 05:10:45 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:13:26.615 05:10:45 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@18 -- # shift 00:13:26.615 05:10:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.615 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 
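The lbaf0-lbaf7 entries just captured for nvme1n1, together with flbas=0x4, identify the in-use LBA format: bits 3:0 of FLBAS index the LBA format table, and lbads is log2 of the LBA data size, so lbaf4's lbads:12 means 4096-byte blocks, matching the "(in use)" marker in the trace. A hedged sketch of that decode, seeded with the values from this log:

    declare -A nvme1n1=( [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )  # values from the trace
    fmt=$(( ${nvme1n1[flbas]} & 0xf ))                 # FLBAS bits 3:0 -> format index 4
    lbaf=${nvme1n1[lbaf$fmt]}                          # -> 'ms:0 lbads:12 rp:0 (in use)'
    [[ $lbaf =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo "LBA data size: $(( 1 << lbads )) bytes"      # -> 4096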
00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:13:26.615 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:13:26.615 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 
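A quick arithmetic cross-check on the numbers being recorded for nvme1n2: nsze=0x100000 blocks at the 4096-byte in-use format works out to 4 GiB per namespace. Sketch, with both inputs taken from the trace:

    nsze=0x100000                                # nvme1n2[nsze] from the trace
    block=4096                                   # in-use LBA data size (lbads:12)
    printf '%d blocks * %d B = %d GiB\n' \
        $(( nsze )) "$block" $(( nsze * block / 1024**3 ))
    # -> 1048576 blocks * 4096 B = 4 GiB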
00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 
05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:26.616 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.616 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.616 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:13:26.617 05:10:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:26.617 05:10:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:13:26.617 05:10:45 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 
id-ns /dev/nvme1n3 00:13:26.617 05:10:45 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@18 -- # shift 00:13:26.617 05:10:45 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # 
nvme1n3[dps]=0 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.617 05:10:45 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.617 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.617 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:13:26.617 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:26.618 05:10:45 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:13:26.618 05:10:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:26.618 05:10:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:26.618 05:10:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:13:26.618 05:10:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:26.618 05:10:45 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:26.618 05:10:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:26.618 05:10:45 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:13:26.618 05:10:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:13:26.618 05:10:45 -- scripts/common.sh@15 -- # local i 00:13:26.618 05:10:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:13:26.618 05:10:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:26.618 05:10:45 -- scripts/common.sh@24 -- # return 0 00:13:26.618 05:10:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:26.618 05:10:45 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:26.618 05:10:45 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:26.618 05:10:45 -- nvme/functions.sh@18 -- # shift 00:13:26.618 05:10:45 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.618 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:26.619 05:10:45 -- 
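Between namespaces the outer loop (functions.sh@47-63) is visible: each /sys/class/nvme/nvme* entry is resolved to its PCI address, filtered through pci_can_use (the `[[ =~ 0000:00:06.0 ]]` with an empty left-hand side is the xtrace of an unset allow-list, so every device passes), and recorded in the parallel maps ctrls, nvmes, bdfs and ordered_ctrls. A rough standalone sketch of that bookkeeping; PCI_ALLOWED and the readlink-based address lookup are stand-ins for what scripts/common.sh actually does:

    declare -A ctrls nvmes bdfs      # controller -> name / ns-array name / PCI BDF
    declare -a ordered_ctrls         # indexed by controller number

    PCI_ALLOWED=""                   # empty allow-list: every controller usable

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        # resolve the controller's PCI address, e.g. 0000:00:06.0
        pci=$(basename "$(readlink -f "$ctrl/device")")
        # keep the device only if the allow-list is empty or mentions it
        [[ -z $PCI_ALLOWED || $PCI_ALLOWED =~ $pci ]] || continue
        ctrl_dev=${ctrl##*/}                       # e.g. nvme2
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns            # name of the per-ctrl ns array
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev # index 2 -> nvme2
    done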
nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 
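nvme2[ver]=0x10400 just captured above is the controller's version register image: major version in bits 31:16, minor in 15:8, tertiary in 7:0, so 0x10400 decodes to NVMe 1.4.0. The identity strings (sn, mn, fr) keep their spec-mandated space padding, which is why the trace quotes them as '12340 ' and 'QEMU NVMe Ctrl '. A one-liner check of the version arithmetic:

    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    # -> NVMe 1.4.0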
05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.619 05:10:45 -- 
nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.619 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:26.619 05:10:45 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:26.619 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:26.620 
05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.620 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.620 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:26.620 05:10:45 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 
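Several of the id-ctrl values collected for nvme2 are log2-encoded: mdts=7 (captured earlier in this scan) caps a single transfer at 2^7 minimum-size pages, while sqes=0x66 and cqes=0x44 carry the required queue-entry size in the low nibble and the maximum in the high nibble. Assuming the usual 4 KiB CAP.MPSMIN page (the capability register is not part of this dump), the decode looks like:

    mdts=7 sqes=0x66 cqes=0x44
    page=4096                                               # assumed CAP.MPSMIN
    echo "max transfer : $(( (1 << mdts) * page )) bytes"   # 524288 = 512 KiB
    echo "SQ entry size: $(( 1 << (sqes & 0xf) )) bytes"    # 64
    echo "CQ entry size: $(( 1 << (cqes & 0xf) )) bytes"    # 16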
'nvme2[oncs]="0x15d"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:26.621 05:10:45 -- 
nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:26.621 05:10:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:26.621 05:10:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:26.621 05:10:45 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:26.621 05:10:45 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@18 -- # shift 00:13:26.621 05:10:45 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:26.621 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.621 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.621 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 
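The odd-looking keys around the power-state table fall straight out of the first-colon split: the id-ctrl line `ps 0 : mp:25.00W operational ...` puts everything after the first colon into one value (hence nvme2[ps0] holding the whole descriptor), while the indented continuation line `rwt:0 rwl:0 idle_power:- active_power:-` happens to start with its own colon-separated token, so it lands under nvme2[rwt]. The same split can be reproduced in isolation:

    line='ps    0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    IFS=: read -r reg val <<< "$line"       # val keeps the remaining colons
    echo "key=${reg//[[:space:]]/} val=${val# }"
    # key=ps0 val=mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0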
00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:26.622 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.622 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.622 05:10:45 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.623 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.623 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.623 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.623 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.623 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.623 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.623 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.623 05:10:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:26.623 05:10:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:26.623 05:10:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:26.623 05:10:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:13:26.623 05:10:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:26.623 05:10:45 -- nvme/functions.sh@47 -- # for ctrl 
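For nvme2n1 the interesting combination is flbas=0x7, whose low nibble selects LBA format 7, and the lbaf7 descriptor tagged "(in use)" just above: lbads:12 means 2^12 = 4096-byte logical blocks carrying 64 bytes of metadata each. Together with nsze=0x17a17a (namespace size in blocks, captured earlier) that pins down the namespace geometry at roughly 5.9 GiB:

    nsze=0x17a17a flbas=0x7
    lbads=12                          # from lbaf$((flbas & 0xf)), i.e. lbaf7
    echo "block size: $(( 1 << lbads )) bytes"           # 4096
    echo "capacity  : $(( nsze * (1 << lbads) )) bytes"  # 6343335936 (~5.9 GiB)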
in /sys/class/nvme/nvme* 00:13:26.623 05:10:45 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:26.623 05:10:45 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:13:26.623 05:10:45 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:13:26.623 05:10:45 -- scripts/common.sh@15 -- # local i 00:13:26.623 05:10:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:13:26.623 05:10:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:26.623 05:10:45 -- scripts/common.sh@24 -- # return 0 00:13:26.623 05:10:45 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:26.623 05:10:45 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:26.623 05:10:45 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:26.623 05:10:45 -- nvme/functions.sh@18 -- # shift 00:13:26.623 05:10:45 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.623 05:10:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:26.623 05:10:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.623 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.623 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.623 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:13:26.623 05:10:45 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.623 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:26.884 05:10:45 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:26.884 05:10:45 -- 
nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:26.884 05:10:45 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.884 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.884 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:26.885 05:10:45 
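
The dump above is the 'nvme_get' capture pattern at work: every 'reg : val' line that 'nvme id-ctrl' prints is split on ':' and eval'd into a global associative array named after the controller. A minimal sketch of that loop, assuming simplified whitespace trimming (the real helper is nvme_get in test/common/nvme/functions.sh and does more bookkeeping than this):

    nvme_get_sketch() {
        local ref=$1 reg val
        local -gA "$ref=()"                  # e.g. declare -gA nvme3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # "tnvmcap   " -> "tnvmcap"
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\${val# }"   # e.g. nvme3[tnvmcap]=0
        done < <(nvme id-ctrl "/dev/$ref")
    }
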
-- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:26.885 05:10:45 -- 
nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.885 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.885 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.885 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
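
Two of the registers captured just above are worth a quick decode: wctemp and cctemp are reported in kelvins per the NVMe spec. A self-contained check using this run's values:

    wctemp=343 cctemp=373   # values captured above for nvme3
    echo "warning at $(( wctemp - 273 ))C, critical at $(( cctemp - 273 ))C"   # 70C / 100C
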
00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg 
val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # 
nvme3[icdoff]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:26.886 05:10:45 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:26.886 05:10:45 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:26.886 05:10:45 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:13:26.886 05:10:45 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:13:26.886 05:10:45 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@18 -- # shift 00:13:26.886 05:10:45 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.886 05:10:45 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:13:26.886 05:10:45 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:26.886 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 
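
The namespace pass that starts here mirrors the controller pass: each nvme3n1-style node under the controller gets its own id-ns capture and is indexed by namespace number. A condensed sketch of the walk at functions.sh@53-@63, assuming it runs where nvme_get is in scope (the real loop also records the controller in ctrls/nvmes/bdfs/ordered_ctrls, as visible further down):

    register_namespaces_sketch() {
        local ctrl=$1 ctrl_dev=${1##*/} ns ns_dev   # $1 = /sys/class/nvme/nvme3
        local -n _ctrl_ns=${ctrl_dev}_ns            # nameref onto nvme3_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do         # nvme3n1, nvme3n2, ...
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev" # fills nvme3n1[nsze], ...
            _ctrl_ns[${ns_dev##*n}]=$ns_dev         # index by namespace number
        done
    }

Size math for the values captured next: flbas 0x4 selects lbaf4 (lbads:12, so 4096-byte blocks), and nsze 0x140000 is 1310720 blocks, i.e. 5 GiB.
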
00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:13:26.887 05:10:45 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:13:26.887 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.887 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.887 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.888 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[nvmsetid]="0"' 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.888 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.888 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.888 05:10:45 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.888 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.888 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.888 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.888 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.888 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.888 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.888 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.888 05:10:45 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:26.888 05:10:45 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # IFS=: 00:13:26.888 05:10:45 -- nvme/functions.sh@21 -- # read -r reg val 00:13:26.888 05:10:45 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:13:26.888 05:10:45 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:26.888 05:10:45 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:26.888 05:10:45 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:13:26.888 05:10:45 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:26.888 05:10:45 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:26.888 05:10:45 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:26.888 05:10:45 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:13:26.888 05:10:45 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:26.888 05:10:45 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:13:26.888 05:10:45 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:26.888 05:10:45 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:13:26.888 05:10:45 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:13:26.888 05:10:45 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:26.888 05:10:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:13:26.888 05:10:45 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:13:26.888 05:10:45 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:13:26.888 05:10:45 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:13:26.888 05:10:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:26.888 05:10:45 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:26.888 05:10:45 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:26.888 05:10:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:13:26.888 05:10:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:26.888 05:10:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:26.888 05:10:45 -- nvme/functions.sh@197 -- # echo nvme1 00:13:26.888 05:10:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:26.888 05:10:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:13:26.888 05:10:45 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:13:26.888 05:10:45 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:13:26.888 05:10:45 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:13:26.888 05:10:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:26.888 05:10:45 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:26.888 05:10:45 -- 
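
Controller selection for the SCC test above comes down to one bit: ONCS bit 8 (the Copy command) in each captured id-ctrl, read back through a bash nameref. A self-contained sketch of the check, with a stand-in array in place of the one the scan built:

    get_feature_sketch() {
        local -n _ctrl=$1                  # nameref onto the scan array
        [[ -n ${_ctrl[$2]} ]] && echo "${_ctrl[$2]}"
    }
    declare -A nvme1=([oncs]=0x15d)        # stand-in for the scanned nvme1
    oncs=$(get_feature_sketch nvme1 oncs)
    (( oncs & 1 << 8 )) && echo nvme1      # 0x15d & 0x100 != 0 -> has SCC

Since the ctrls array is iterated in hash order (nvme1, nvme0, nvme3, nvme2 in this run), all four pass the bit test and get_ctrl_with_feature simply returns the first one echoed, nvme1.
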
nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:26.888 05:10:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:13:26.888 05:10:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:26.888 05:10:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:26.888 05:10:45 -- nvme/functions.sh@197 -- # echo nvme0 00:13:26.888 05:10:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:26.888 05:10:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:13:26.888 05:10:45 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:13:26.888 05:10:45 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:13:26.888 05:10:45 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:13:26.888 05:10:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:13:26.888 05:10:45 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:26.888 05:10:45 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:26.888 05:10:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:13:26.888 05:10:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:26.888 05:10:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:26.888 05:10:45 -- nvme/functions.sh@197 -- # echo nvme3 00:13:26.888 05:10:45 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:26.888 05:10:45 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:13:26.888 05:10:45 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:13:26.888 05:10:45 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:13:26.888 05:10:45 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:13:26.888 05:10:45 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:26.888 05:10:45 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:26.888 05:10:45 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:26.888 05:10:45 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:26.888 05:10:45 -- nvme/functions.sh@76 -- # echo 0x15d 00:13:26.888 05:10:45 -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:26.888 05:10:45 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:26.888 05:10:45 -- nvme/functions.sh@197 -- # echo nvme2 00:13:26.888 05:10:45 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:13:26.888 05:10:45 -- nvme/functions.sh@206 -- # echo nvme1 00:13:26.888 05:10:45 -- nvme/functions.sh@207 -- # return 0 00:13:26.888 05:10:45 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:26.888 05:10:45 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:13:26.888 05:10:45 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:27.824 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:28.083 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:28.083 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:28.083 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:28.083 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:28.341 05:10:47 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:13:28.341 05:10:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:28.341 05:10:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:28.341 05:10:47 -- 
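
run_test is the harness wrapper being entered here: it prints START/END banners around the command and times it, which is where the banner rows and the real/user/sys lines below come from. A reduced sketch, assuming only the behavior visible in this output (the real wrapper in autotest_common.sh also validates its arguments, hence the '4 -le 1' check, and toggles xtrace):

    run_test_sketch() {
        local name=$1 rc; shift
        echo "************ START TEST $name ************"
        time "$@"; rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    # Illustrative invocation matching this run (path taken from the log):
    run_test_sketch nvme_simple_copy \
        /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy \
        -r 'trtype:pcie traddr:0000:00:08.0'
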
common/autotest_common.sh@10 -- # set +x 00:13:28.341 ************************************ 00:13:28.341 START TEST nvme_simple_copy 00:13:28.341 ************************************ 00:13:28.341 05:10:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:13:28.599 Initializing NVMe Controllers 00:13:28.599 Attaching to 0000:00:08.0 00:13:28.599 Controller supports SCC. Attached to 0000:00:08.0 00:13:28.599 Namespace ID: 1 size: 4GB 00:13:28.599 Initialization complete. 00:13:28.599 00:13:28.599 Controller QEMU NVMe Ctrl (12342 ) 00:13:28.599 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:28.599 Namespace Block Size:4096 00:13:28.599 Writing LBAs 0 to 63 with Random Data 00:13:28.599 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:28.599 LBAs matching Written Data: 64 00:13:28.599 00:13:28.599 real 0m0.344s 00:13:28.599 user 0m0.129s 00:13:28.599 sys 0m0.112s 00:13:28.599 05:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.599 05:10:47 -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 ************************************ 00:13:28.599 END TEST nvme_simple_copy 00:13:28.599 ************************************ 00:13:28.599 ************************************ 00:13:28.599 END TEST nvme_scc 00:13:28.599 ************************************ 00:13:28.599 00:13:28.599 real 0m8.541s 00:13:28.599 user 0m1.380s 00:13:28.599 sys 0m2.234s 00:13:28.599 05:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.599 05:10:47 -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 05:10:47 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:13:28.599 05:10:47 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:13:28.599 05:10:47 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:13:28.599 05:10:47 -- spdk/autotest.sh@238 -- # [[ 1 -eq 1 ]] 00:13:28.599 05:10:47 -- spdk/autotest.sh@239 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:28.599 05:10:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:28.599 05:10:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:28.599 05:10:47 -- common/autotest_common.sh@10 -- # set +x 00:13:28.599 ************************************ 00:13:28.599 START TEST nvme_fdp 00:13:28.599 ************************************ 00:13:28.599 05:10:47 -- common/autotest_common.sh@1104 -- # test/nvme/nvme_fdp.sh 00:13:28.857 * Looking for test storage... 
00:13:28.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:28.857 05:10:47 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:28.857 05:10:47 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:28.857 05:10:47 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:28.857 05:10:47 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:28.857 05:10:47 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:28.857 05:10:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.857 05:10:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.857 05:10:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.857 05:10:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.857 05:10:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.857 05:10:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.857 05:10:47 -- paths/export.sh@5 -- # export PATH 00:13:28.857 05:10:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.857 05:10:47 -- nvme/functions.sh@10 -- # ctrls=() 00:13:28.857 05:10:47 -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:28.857 05:10:47 -- nvme/functions.sh@11 -- # nvmes=() 00:13:28.857 05:10:47 -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:28.857 05:10:47 -- nvme/functions.sh@12 -- # bdfs=() 00:13:28.857 05:10:47 -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:28.857 05:10:47 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:28.858 05:10:47 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:28.858 05:10:47 -- nvme/functions.sh@14 -- # nvme_name= 00:13:28.858 05:10:47 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:28.858 05:10:47 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:29.424 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:29.424 Waiting for block devices as requested 00:13:29.424 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.682 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.682 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:13:29.941 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:13:35.215 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:13:35.215 05:10:53 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:35.215 05:10:53 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:35.215 05:10:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:35.215 05:10:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:13:35.215 05:10:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:13:35.215 05:10:53 -- scripts/common.sh@15 -- # local i 00:13:35.215 05:10:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:13:35.215 05:10:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:35.215 05:10:53 -- scripts/common.sh@24 -- # return 0 00:13:35.215 05:10:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:35.215 05:10:53 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:35.215 05:10:53 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@18 -- # shift 00:13:35.215 05:10:53 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- 
# nvme0[fr]='8.0.0 ' 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.215 
05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.215 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:35.215 05:10:53 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:35.215 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:35.216 05:10:53 -- nvme/functions.sh@21 
-- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # 
nvme0[hmpre]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.216 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.216 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.216 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 
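Everything from scan_nvme_ctrls down to here (and continuing below for the next controller) is one mechanism traced over and over: nvme_get runs nvme id-ctrl and, with IFS=:, splits each "field : value" line of its text output into reg and val, skips entries with an empty value, and evals the pair into a per-controller associative array, yielding nvme0[mdts]=7, nvme0[ver]=0x10400, and so on. A minimal stand-alone sketch of that parsing pattern, not the full functions.sh implementation:

# Minimal sketch of the pattern being traced above (cf. functions.sh nvme_get).
declare -A nvme0
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}     # field names arrive right-padded, e.g. 'mdts   '
    [[ -n $val ]] || continue    # skip banner lines that carry no ':' value
    nvme0[$reg]=${val# }         # nvme0[vid]=0x1b36, nvme0[mdts]=7, ...
done < <(nvme id-ctrl /dev/nvme0)
echo "sn=${nvme0[sn]} mdts=${nvme0[mdts]} ctratt=${nvme0[ctratt]}"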
00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.217 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.217 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.217 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:53 -- nvme/functions.sh@21 
-- # read -r reg val 00:13:35.218 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:35.218 05:10:53 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:35.218 05:10:53 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:35.218 
05:10:54 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:35.218 05:10:54 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.218 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.218 05:10:54 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:35.218 05:10:54 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:35.218 05:10:54 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:35.218 05:10:54 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:13:35.219 05:10:54 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:35.219 05:10:54 -- nvme/functions.sh@47 -- # for ctrl in 
/sys/class/nvme/nvme* 00:13:35.219 05:10:54 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:13:35.219 05:10:54 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:13:35.219 05:10:54 -- scripts/common.sh@15 -- # local i 00:13:35.219 05:10:54 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:13:35.219 05:10:54 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:35.219 05:10:54 -- scripts/common.sh@24 -- # return 0 00:13:35.219 05:10:54 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:35.219 05:10:54 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:35.219 05:10:54 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@18 -- # shift 00:13:35.219 05:10:54 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:35.219 05:10:54 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:35.219 05:10:54 -- 
nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.219 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:35.219 05:10:54 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.219 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:35.220 05:10:54 
-- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:35.220 05:10:54 -- 
nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.220 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.220 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
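Among all these near-identical fields, ctratt is the one that matters for nvme_fdp: the first controller reported ctratt=0x88010 under subnqn nqn.2019-08.org.qemu:fdp-subsys3, while this one reports ctratt=0x8000. Assuming the NVMe 2.0 (TP4146) layout in which CTRATT bit 19 advertises Flexible Data Placement, only nvme0 is FDP-capable, which is how the test can tell the FDP subsystem apart from the ordinary 12342-style controllers. A sketch of that capability check:

# Sketch; assumes CTRATT bit 19 = Flexible Data Placement (NVMe 2.0 / TP4146).
for ctrl in /dev/nvme[0-9]; do
    ctratt=$(nvme id-ctrl "$ctrl" | awk -F: '/^ctratt/ {gsub(/ /,"",$2); print $2; exit}')
    if (( (ctratt >> 19) & 1 )); then
        echo "$ctrl supports FDP (ctratt=$ctratt)"   # matches nvme0's 0x88010
    fi
done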
00:13:35.220 05:10:54 -- nvme/functions.sh@23 -- # nvme1 id-ctrl (continued): sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:13:35.221 05:10:54 -- nvme/functions.sh@23 -- # nvme1 power state: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:13:35.221 05:10:54 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:13:35.221 05:10:54 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:13:35.221 05:10:54 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 (via /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1)
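The trace above shows nvme/functions.sh turning each "reg : val" line of nvme-cli identify output into an entry of a global associative array named after the device. A minimal bash sketch of that idiom follows; the helper name nvme_get, the IFS=: / read -r reg val split, and the eval assignment come straight from the trace, while the trimming details are illustrative assumptions:

  # Sketch: parse "reg : val" lines from an nvme-cli identify command into
  # a global associative array named by $1 (e.g. nvme1, nvme1n1).
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}                 # register names carry no spaces
          val=${val#"${val%%[![:space:]]*}"}       # trim leading whitespace only
          [[ -n $reg && -n $val ]] || continue     # skip headers and blank lines
          eval "${ref}[$reg]=\"\$val\""            # e.g. nvme1[sqes]="0x66"
      done < <("$@")
  }

  # usage, as in the trace:
  #   nvme_get nvme1n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
  #   echo "${nvme1n1[nsze]}"                      # -> 0x100000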
00:13:35.221 05:10:54 -- nvme/functions.sh@23 -- # nvme1n1 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:35.223 05:10:54 -- nvme/functions.sh@23 -- # nvme1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:35.223 05:10:54 -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme1n1
00:13:35.223 05:10:54 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]]
00:13:35.223 05:10:54 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2
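The flbas/lbafN pairs parsed above decode as follows: FLBAS bits 3:0 pick the active LBA format, and each descriptor's lbads field is log2 of the data block size. For nvme1n1 that means lbaf4, lbads:12, hence 4096-byte blocks with no separate metadata (ms:0). A small sketch, assuming the arrays were filled as in the trace:

  fmt=$(( ${nvme1n1[flbas]} & 0xf ))          # -> 4
  lbaf=${nvme1n1[lbaf$fmt]}                   # -> 'ms:0 lbads:12 rp:0 (in use)'
  lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
  echo $(( 1 << lbads ))                      # -> 4096 bytes per block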
00:13:35.223 05:10:54 -- nvme/functions.sh@23 -- # nvme1n2 id-ns: every register and LBA format identical to nvme1n1 above (nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0 ')
00:13:35.224 05:10:54 -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme1n2
00:13:35.224 05:10:54 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]]
00:13:35.224 05:10:54 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3
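Between namespaces the same walk repeats: glob the controller's nNN nodes in sysfs, run id-ns on each, and record the device under its namespace number via a nameref. A sketch under the same assumptions (nvme_get as above; declare -n standing in for the function-local local -n seen in the trace):

  declare -A nvme1_ns=()
  declare -n _ctrl_ns=nvme1_ns             # nameref to the per-controller map
  ctrl=/sys/class/nvme/nvme1
  for ns in "$ctrl/${ctrl##*/}n"*; do      # expands to nvme1n1, nvme1n2, ...
      [[ -e $ns ]] || continue
      ns_dev=${ns##*/}
      nvme_get "$ns_dev" /usr/local/src/nvme-cli/nvme id-ns "/dev/$ns_dev"
      _ctrl_ns[${ns_dev##*n}]=$ns_dev      # _ctrl_ns[2]=nvme1n2, etc.
  done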
00:13:35.225 05:10:54 -- nvme/functions.sh@23 -- # nvme1n3 id-ns: every register and LBA format identical to nvme1n1 above (nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0 ')
00:13:35.226 05:10:54 -- nvme/functions.sh@58 -- # _ctrl_ns[3]=nvme1n3
00:13:35.226 05:10:54 -- nvme/functions.sh@60 -- # ctrls[nvme1]=nvme1; nvmes[nvme1]=nvme1_ns; bdfs[nvme1]=0000:00:08.0; ordered_ctrls[1]=nvme1
00:13:35.226 05:10:54 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:13:35.226 05:10:54 -- nvme/functions.sh@50 -- # pci=0000:00:06.0; pci_can_use 0000:00:06.0 -> scripts/common.sh: [[ =~ 0000:00:06.0 ]]; [[ -z '' ]]; return 0
00:13:35.226 05:10:54 -- nvme/functions.sh@52 -- # ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2
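Before a controller is adopted, scripts/common.sh gates it by PCI address; the [[ =~ 0000:00:06.0 ]] and [[ -z '' ]] steps above both expand empty lists, so the device passes. A hedged reconstruction of that shape follows; the list variable names (PCI_BLOCKED, PCI_ALLOWED) are assumptions, not taken from the log:

  pci_can_use() {
      local i
      # empty block list, as in the trace's [[ =~ 0000:00:06.0 ]]
      [[ ${PCI_BLOCKED:-} =~ $1 ]] && return 1
      # empty allow list ([[ -z '' ]]) admits every remaining device
      [[ -z ${PCI_ALLOWED:-} ]] && return 0
      for i in $PCI_ALLOWED; do
          [[ $i == "$1" ]] && return 0
      done
      return 1
  }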
00:13:35.226 05:10:54 -- nvme/functions.sh@23 -- # nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0
05:10:54 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.227 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.227 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.227 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.227 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.227 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.227 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.227 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.227 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.227 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.227 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.227 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.227 05:10:54 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:35.227 05:10:54 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:35.227 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 
05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.228 05:10:54 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.228 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 
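One field in the nvme2 identify-controller dump above is worth decoding: mdts=7. MDTS is a power-of-two multiplier of the controller's minimum memory page size (CAP.MPSMIN); the CAP register is not shown in this log, so assuming the usual 4 KiB minimum page, the largest single transfer this controller accepts works out to 512 KiB:

    # mdts=7 comes from the nvme2 dump above; the 4 KiB CAP.MPSMIN
    # page size is an assumption (QEMU's usual default), not logged here.
    echo $(( (1 << 7) * 4096 ))   # 524288 bytes = 512 KiB per command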
00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:13:35.228 05:10:54 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:13:35.228 05:10:54 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:13:35.228 05:10:54 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:13:35.228 05:10:54 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:13:35.228 05:10:54 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:13:35.229 05:10:54 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:35.229 05:10:54 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:13:35.229 05:10:54 -- nvme/functions.sh@18 -- # shift
00:13:35.229 05:10:54 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:13:35.229 05:10:54 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:13:35.229 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:13:35.230 05:10:54 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:13:35.230 05:10:54 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:13:35.230 05:10:54 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:13:35.230 05:10:54 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0
00:13:35.230 05:10:54 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:13:35.230 05:10:54 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:13:35.230 05:10:54 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:13:35.230 05:10:54 -- nvme/functions.sh@49 -- # pci=0000:00:07.0
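For readers following the trace: nvme/functions.sh@16-23 is walking each "reg : val" line printed by the nvme-cli id-ctrl/id-ns commands and storing it in a bash associative array named after the device, which is what produces the nvme2[...] and nvme2n1[...] lines above. A minimal standalone sketch of that pattern, under the assumption that the trimming details match (this is not the verbatim SPDK helper):

    # Sketch of the nvme_get pattern traced above: parse the plain-text
    # output of `nvme id-ctrl` / `nvme id-ns` into a global associative
    # array whose name is passed as $1.
    nvme_get() {
        local ref=$1 source=$2 dev=$3 reg val
        local -gA "$ref=()"                    # e.g. declares global nvme2n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # skip header lines with no "reg : val"
            reg=${reg//[[:space:]]/}           # exact trimming is an assumption
            eval "${ref}[\$reg]=\"\${val# }\"" # e.g. nvme2n1[nsze]=0x17a17a
        done < <(nvme "$source" "$dev")
    }

Used as in the trace, nvme_get nvme2n1 id-ns /dev/nvme2n1 followed by echo "${nvme2n1[nsze]}" would print 0x17a17a; the eval is what lets one helper populate differently named arrays for each controller and namespace.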
nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0
00:13:35.230 05:10:54 -- scripts/common.sh@15 -- # local i
00:13:35.230 05:10:54 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]]
00:13:35.230 05:10:54 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:13:35.230 05:10:54 -- scripts/common.sh@24 -- # return 0
00:13:35.230 05:10:54 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:13:35.230 05:10:54 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:13:35.230 05:10:54 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:13:35.230 05:10:54 -- nvme/functions.sh@18 -- # shift
00:13:35.230 05:10:54 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:13:35.230 05:10:54 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 '
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme3[cmic]=0
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:13:35.230 05:10:54 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:13:35.231 05:10:54 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[awun]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[awupf]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[acwu]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[mnan]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0
00:13:35.232 05:10:54 -- nvme/functions.sh@22 -- # [[
-n 0 ]] 00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:35.232 05:10:54 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:35.233 05:10:54 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:35.233 05:10:54 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:13:35.233 05:10:54 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:13:35.233 05:10:54 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@18 -- # shift 00:13:35.233 05:10:54 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 
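Note on the traces above and below: every nvme3/nvme3n1 assignment comes from one loop in nvme/functions.sh. nvme_get splits each 'reg : val' line of nvme-cli output on the colon and stores it into a global associative array via eval. A minimal bash sketch of that loop, assuming nvme-cli's human-readable id-ns format (the array name "ns" is illustrative; the binary path and device are the ones shown in this trace):
    # Parse `nvme id-ns` output into an associative array, mirroring the
    # IFS=: / read -r reg val iterations traced above.
    declare -A ns=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip banner and blank lines
        reg=${reg//[[:space:]]/}         # strip padding around the register name
        ns[$reg]=${val# }                # drop the space after the colon
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1)
    echo "nsze=${ns[nsze]}"              # prints nsze=0x140000 for this namespace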
00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # 
nvme3n1[fpi]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- 
nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.233 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.233 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:35.233 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 
-- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n ms:16 
lbads:12 rp:0 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:35.234 05:10:54 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # IFS=: 00:13:35.234 05:10:54 -- nvme/functions.sh@21 -- # read -r reg val 00:13:35.234 05:10:54 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:13:35.234 05:10:54 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:35.234 05:10:54 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:35.234 05:10:54 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:13:35.234 05:10:54 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:35.234 05:10:54 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:35.234 05:10:54 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:35.234 05:10:54 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:13:35.234 05:10:54 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:35.234 05:10:54 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:13:35.234 05:10:54 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:35.234 05:10:54 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:13:35.234 05:10:54 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:13:35.234 05:10:54 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:35.234 05:10:54 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:13:35.234 05:10:54 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:13:35.234 05:10:54 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:13:35.234 05:10:54 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:13:35.234 05:10:54 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:35.234 05:10:54 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:35.234 05:10:54 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:35.234 05:10:54 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:35.234 05:10:54 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:35.234 05:10:54 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:35.234 05:10:54 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:35.234 05:10:54 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:35.234 05:10:54 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:13:35.234 05:10:54 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:13:35.234 05:10:54 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:13:35.234 05:10:54 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:13:35.234 05:10:54 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:35.234 05:10:54 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:35.493 05:10:54 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:35.493 05:10:54 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:35.493 05:10:54 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:35.493 05:10:54 -- nvme/functions.sh@76 -- # echo 0x88010 
00:13:35.493 05:10:54 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:13:35.493 05:10:54 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:35.493 05:10:54 -- nvme/functions.sh@197 -- # echo nvme0 00:13:35.493 05:10:54 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:35.493 05:10:54 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:13:35.493 05:10:54 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:13:35.493 05:10:54 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:13:35.493 05:10:54 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:13:35.493 05:10:54 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:35.493 05:10:54 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:35.493 05:10:54 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:35.493 05:10:54 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:35.493 05:10:54 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:35.493 05:10:54 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:35.493 05:10:54 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:35.493 05:10:54 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:35.493 05:10:54 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:35.493 05:10:54 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:13:35.493 05:10:54 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:13:35.493 05:10:54 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:13:35.493 05:10:54 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:13:35.493 05:10:54 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:35.493 05:10:54 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:35.493 05:10:54 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:35.493 05:10:54 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:35.493 05:10:54 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:35.493 05:10:54 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:35.493 05:10:54 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:35.493 05:10:54 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:35.493 05:10:54 -- nvme/functions.sh@204 -- # trap - ERR 00:13:35.493 05:10:54 -- nvme/functions.sh@204 -- # print_backtrace 00:13:35.493 05:10:54 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:13:35.493 05:10:54 -- common/autotest_common.sh@1132 -- # return 0 00:13:35.493 05:10:54 -- nvme/functions.sh@204 -- # trap - ERR 00:13:35.493 05:10:54 -- nvme/functions.sh@204 -- # print_backtrace 00:13:35.493 05:10:54 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:13:35.493 05:10:54 -- common/autotest_common.sh@1132 -- # return 0 00:13:35.493 05:10:54 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:13:35.493 05:10:54 -- nvme/functions.sh@206 -- # echo nvme0 00:13:35.493 05:10:54 -- nvme/functions.sh@207 -- # return 0 00:13:35.493 05:10:54 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:13:35.493 05:10:54 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:13:35.493 05:10:54 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:36.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:36.687 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:36.687 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:36.687 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:36.687 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:36.944 05:10:55 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement 
/home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:13:36.944 05:10:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:36.944 05:10:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:36.944 05:10:55 -- common/autotest_common.sh@10 -- # set +x 00:13:36.944 ************************************ 00:13:36.944 START TEST nvme_flexible_data_placement 00:13:36.944 ************************************ 00:13:36.944 05:10:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:13:37.204 Initializing NVMe Controllers 00:13:37.204 Attaching to 0000:00:09.0 00:13:37.204 Controller supports FDP Attached to 0000:00:09.0 00:13:37.204 Namespace ID: 1 Endurance Group ID: 1 00:13:37.204 Initialization complete. 00:13:37.204 00:13:37.204 ================================== 00:13:37.204 == FDP tests for Namespace: #01 == 00:13:37.204 ================================== 00:13:37.204 00:13:37.204 Get Feature: FDP: 00:13:37.204 ================= 00:13:37.204 Enabled: Yes 00:13:37.204 FDP configuration Index: 0 00:13:37.204 00:13:37.204 FDP configurations log page 00:13:37.204 =========================== 00:13:37.204 Number of FDP configurations: 1 00:13:37.204 Version: 0 00:13:37.204 Size: 112 00:13:37.204 FDP Configuration Descriptor: 0 00:13:37.204 Descriptor Size: 96 00:13:37.204 Reclaim Group Identifier format: 2 00:13:37.204 FDP Volatile Write Cache: Not Present 00:13:37.204 FDP Configuration: Valid 00:13:37.204 Vendor Specific Size: 0 00:13:37.204 Number of Reclaim Groups: 2 00:13:37.204 Number of Reclaim Unit Handles: 8 00:13:37.204 Max Placement Identifiers: 128 00:13:37.204 Number of Namespaces Supported: 256 00:13:37.204 Reclaim unit Nominal Size: 6000000 bytes 00:13:37.204 Estimated Reclaim Unit Time Limit: Not Reported 00:13:37.204 RUH Desc #000: RUH Type: Initially Isolated 00:13:37.204 RUH Desc #001: RUH Type: Initially Isolated 00:13:37.204 RUH Desc #002: RUH Type: Initially Isolated 00:13:37.204 RUH Desc #003: RUH Type: Initially Isolated 00:13:37.204 RUH Desc #004: RUH Type: Initially Isolated 00:13:37.204 RUH Desc #005: RUH Type: Initially Isolated 00:13:37.204 RUH Desc #006: RUH Type: Initially Isolated 00:13:37.204 RUH Desc #007: RUH Type: Initially Isolated 00:13:37.204 00:13:37.204 FDP reclaim unit handle usage log page 00:13:37.204 ====================================== 00:13:37.204 Number of Reclaim Unit Handles: 8 00:13:37.204 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:37.204 RUH Usage Desc #001: RUH Attributes: Unused 00:13:37.204 RUH Usage Desc #002: RUH Attributes: Unused 00:13:37.204 RUH Usage Desc #003: RUH Attributes: Unused 00:13:37.204 RUH Usage Desc #004: RUH Attributes: Unused 00:13:37.204 RUH Usage Desc #005: RUH Attributes: Unused 00:13:37.204 RUH Usage Desc #006: RUH Attributes: Unused 00:13:37.204 RUH Usage Desc #007: RUH Attributes: Unused 00:13:37.204 00:13:37.204 FDP statistics log page 00:13:37.204 ======================= 00:13:37.204 Host bytes with metadata written: 764522496 00:13:37.204 Media bytes with metadata written: 764645376 00:13:37.204 Media bytes erased: 0 00:13:37.204 00:13:37.204 FDP Reclaim unit handle status 00:13:37.204 ============================== 00:13:37.204 Number of RUHS descriptors: 2 00:13:37.204 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000026e5 00:13:37.204 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:37.204
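Two details in the run above are worth calling out: the controller was selected by testing CTRATT bit 19, the FDP capability bit (nvme0's 0x88010 has it set; the other controllers' 0x8000 does not), and the FDP statistics page permits a quick write-amplification estimate from media vs. host bytes written. A short sketch using the values from this run (bc is assumed to be available on the test VM):
    # CTRATT bit 19 = FDP support, as tested by ctrl_has_fdp above.
    ctratt=0x88010
    (( ctratt & 1 << 19 )) && echo "controller is FDP capable"
    # Media vs. host bytes written, from the FDP statistics log page above.
    echo "scale=6; 764645376 / 764522496" | bc    # ~1.00016, i.e. negligible WA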
00:13:37.204 FDP write on placement id: 0 success 00:13:37.204 00:13:37.204 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:37.204 00:13:37.204 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:37.204 00:13:37.204 Get Feature: FDP Events for Placement handle: #0 00:13:37.204 ======================== 00:13:37.204 Number of FDP Events: 6 00:13:37.204 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:37.204 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:37.204 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:13:37.204 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:37.204 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:37.204 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:13:37.204 00:13:37.204 FDP events log page 00:13:37.204 =================== 00:13:37.204 Number of FDP events: 1 00:13:37.204 FDP Event #0: 00:13:37.204 Event Type: RU Not Written to Capacity 00:13:37.204 Placement Identifier: Valid 00:13:37.204 NSID: Valid 00:13:37.204 Location: Valid 00:13:37.204 Placement Identifier: 0 00:13:37.204 Event Timestamp: 11 00:13:37.204 Namespace Identifier: 1 00:13:37.204 Reclaim Group Identifier: 0 00:13:37.205 Reclaim Unit Handle Identifier: 0 00:13:37.205 00:13:37.205 FDP test passed 00:13:37.205 00:13:37.205 real 0m0.324s 00:13:37.205 user 0m0.105s 00:13:37.205 sys 0m0.117s 00:13:37.205 05:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.205 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:37.205 ************************************ 00:13:37.205 END TEST nvme_flexible_data_placement 00:13:37.205 ************************************ 00:13:37.205 00:13:37.205 real 0m8.512s 00:13:37.205 user 0m1.350s 00:13:37.205 sys 0m2.276s 00:13:37.205 05:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.205 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:37.205 ************************************ 00:13:37.205 END TEST nvme_fdp 00:13:37.205 ************************************ 00:13:37.205 05:10:56 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:13:37.205 05:10:56 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:37.205 05:10:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:37.205 05:10:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:37.205 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:37.205 ************************************ 00:13:37.205 START TEST nvme_rpc 00:13:37.205 ************************************ 00:13:37.205 05:10:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:37.464 * Looking for test storage...
00:13:37.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:37.464 05:10:56 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:37.464 05:10:56 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:37.464 05:10:56 -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:37.464 05:10:56 -- common/autotest_common.sh@1509 -- # local bdfs 00:13:37.464 05:10:56 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:37.464 05:10:56 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:37.464 05:10:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:37.464 05:10:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:13:37.464 05:10:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:37.464 05:10:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:37.464 05:10:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:37.464 05:10:56 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:37.464 05:10:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:13:37.464 05:10:56 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:13:37.464 05:10:56 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:13:37.464 05:10:56 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67646 00:13:37.464 05:10:56 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:37.464 05:10:56 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:37.464 05:10:56 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67646 00:13:37.464 05:10:56 -- common/autotest_common.sh@819 -- # '[' -z 67646 ']' 00:13:37.464 05:10:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.464 05:10:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:37.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.464 05:10:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.464 05:10:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:37.464 05:10:56 -- common/autotest_common.sh@10 -- # set +x 00:13:37.722 [2024-07-26 05:10:56.588398] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
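The BDF probing traced above boils down to three steps: enumerate NVMe transport addresses with gen_nvme.sh, bail out if none were found, and take the first. A condensed sketch of that helper ($rootdir is the repo root, as in the sourced script; the gen_nvme.sh path and jq filter are the ones shown in this trace):
    # Condensed get_first_nvme_bdf, per the trace above.
    get_first_nvme_bdf() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1    # no NVMe controllers found
        echo "${bdfs[0]}"                    # 0000:00:06.0 on this runner
    }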
00:13:37.722 [2024-07-26 05:10:56.588560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67646 ] 00:13:37.722 [2024-07-26 05:10:56.776909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:38.289 [2024-07-26 05:10:57.125939] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:38.289 [2024-07-26 05:10:57.126302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.289 [2024-07-26 05:10:57.126320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.223 05:10:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:39.223 05:10:58 -- common/autotest_common.sh@852 -- # return 0 00:13:39.223 05:10:58 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:13:39.481 Nvme0n1 00:13:39.481 05:10:58 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:39.481 05:10:58 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:39.481 request: 00:13:39.481 { 00:13:39.481 "filename": "non_existing_file", 00:13:39.481 "bdev_name": "Nvme0n1", 00:13:39.481 "method": "bdev_nvme_apply_firmware", 00:13:39.481 "req_id": 1 00:13:39.481 } 00:13:39.481 Got JSON-RPC error response 00:13:39.481 response: 00:13:39.481 { 00:13:39.481 "code": -32603, 00:13:39.481 "message": "open file failed." 00:13:39.481 } 00:13:39.481 05:10:58 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:39.481 05:10:58 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:39.481 05:10:58 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:39.759 05:10:58 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:39.759 05:10:58 -- nvme/nvme_rpc.sh@40 -- # killprocess 67646 00:13:39.759 05:10:58 -- common/autotest_common.sh@926 -- # '[' -z 67646 ']' 00:13:39.759 05:10:58 -- common/autotest_common.sh@930 -- # kill -0 67646 00:13:39.759 05:10:58 -- common/autotest_common.sh@931 -- # uname 00:13:39.759 05:10:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:39.759 05:10:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67646 00:13:39.759 05:10:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:39.759 killing process with pid 67646 00:13:39.759 05:10:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:39.759 05:10:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67646' 00:13:39.759 05:10:58 -- common/autotest_common.sh@945 -- # kill 67646 00:13:39.759 05:10:58 -- common/autotest_common.sh@950 -- # wait 67646 00:13:42.324 00:13:42.324 real 0m4.820s 00:13:42.324 user 0m8.757s 00:13:42.324 sys 0m0.713s 00:13:42.324 05:11:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.324 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:13:42.324 ************************************ 00:13:42.324 END TEST nvme_rpc 00:13:42.324 ************************************ 00:13:42.324 05:11:01 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:42.324 05:11:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:42.324 05:11:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:13:42.324 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:13:42.324 ************************************ 00:13:42.324 START TEST nvme_rpc_timeouts 00:13:42.324 ************************************ 00:13:42.324 05:11:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:42.324 * Looking for test storage... 00:13:42.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:42.324 05:11:01 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:42.324 05:11:01 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67729 00:13:42.324 05:11:01 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67729 00:13:42.324 05:11:01 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67759 00:13:42.324 05:11:01 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:42.324 05:11:01 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:42.324 05:11:01 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67759 00:13:42.324 05:11:01 -- common/autotest_common.sh@819 -- # '[' -z 67759 ']' 00:13:42.324 05:11:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.324 05:11:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:42.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.324 05:11:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.324 05:11:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:42.324 05:11:01 -- common/autotest_common.sh@10 -- # set +x 00:13:42.324 [2024-07-26 05:11:01.335836] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:42.324 [2024-07-26 05:11:01.335949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67759 ] 00:13:42.581 [2024-07-26 05:11:01.495782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:42.840 [2024-07-26 05:11:01.726774] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:42.840 [2024-07-26 05:11:01.727357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.840 [2024-07-26 05:11:01.727386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.773 05:11:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:43.773 05:11:02 -- common/autotest_common.sh@852 -- # return 0 00:13:43.773 05:11:02 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:43.773 Checking default timeout settings: 00:13:43.773 05:11:02 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:44.339 Making settings changes with rpc: 00:13:44.339 05:11:03 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:44.339 05:11:03 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:44.339 Check default vs. 
modified settings: 00:13:44.339 05:11:03 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:13:44.339 05:11:03 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67729 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67729 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:44.605 Setting action_on_timeout is changed as expected. 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67729 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67729 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:44.605 Setting timeout_us is changed as expected. 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67729 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67729 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:44.605 Setting timeout_admin_us is changed as expected. 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
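Each "changed as expected" line above is the same three-stage pipeline run against both saved configs: grep the setting out of the JSON, take the second field with awk, strip punctuation with sed, then compare before and after. A condensed sketch (the /tmp file names follow this run's PID, 67729):
    # Compare one timeout setting between the default and modified configs.
    check_setting() {
        local name=$1 before after
        before=$(grep "$name" /tmp/settings_default_67729 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$name" /tmp/settings_modified_67729 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $name is changed as expected."
    }
    check_setting action_on_timeout    # none -> abort in this run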
00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67729 /tmp/settings_modified_67729 00:13:44.605 05:11:03 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67759 00:13:44.605 05:11:03 -- common/autotest_common.sh@926 -- # '[' -z 67759 ']' 00:13:44.605 05:11:03 -- common/autotest_common.sh@930 -- # kill -0 67759 00:13:44.605 05:11:03 -- common/autotest_common.sh@931 -- # uname 00:13:44.605 05:11:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:44.605 05:11:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67759 00:13:44.605 05:11:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:44.605 05:11:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:44.605 killing process with pid 67759 00:13:44.605 05:11:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67759' 00:13:44.605 05:11:03 -- common/autotest_common.sh@945 -- # kill 67759 00:13:44.605 05:11:03 -- common/autotest_common.sh@950 -- # wait 67759 00:13:47.138 RPC TIMEOUT SETTING TEST PASSED. 00:13:47.138 05:11:06 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:13:47.138 00:13:47.138 real 0m4.961s 00:13:47.138 user 0m9.297s 00:13:47.138 sys 0m0.704s 00:13:47.138 05:11:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.138 ************************************ 00:13:47.138 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:47.138 END TEST nvme_rpc_timeouts 00:13:47.138 ************************************ 00:13:47.138 05:11:06 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:13:47.138 05:11:06 -- spdk/autotest.sh@255 -- # [[ 1 -eq 1 ]] 00:13:47.138 05:11:06 -- spdk/autotest.sh@256 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:47.138 05:11:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:47.138 05:11:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:47.138 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:47.138 ************************************ 00:13:47.138 START TEST nvme_xnvme 00:13:47.138 ************************************ 00:13:47.138 05:11:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:47.396 * Looking for test storage... 
00:13:47.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:47.396 05:11:06 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:47.396 05:11:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.396 05:11:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.396 05:11:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.396 05:11:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.396 05:11:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.397 05:11:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.397 05:11:06 -- paths/export.sh@5 -- # export PATH 00:13:47.397 05:11:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:47.397 05:11:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:47.397 05:11:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:47.397 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:47.397 ************************************ 00:13:47.397 START TEST xnvme_to_malloc_dd_copy 00:13:47.397 ************************************ 00:13:47.397 05:11:06 -- common/autotest_common.sh@1104 -- # malloc_to_xnvme_copy 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:47.397 05:11:06 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:47.397 05:11:06 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:47.397 05:11:06 -- dd/common.sh@191 -- # return 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@18 -- # local io 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:47.397 
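The copy being prepared here drives spdk_dd between two bdevs: a 2097152-block, 512-byte-block malloc source and an xnvme "null0" target backed by the 1 GiB null_blk device. A condensed sketch of the invocation, matching the gen_conf JSON printed below (the original feeds the config through /dev/fd/62; /tmp/xnvme_dd.json is an illustrative stand-in holding the config shown under it):
    # Stand up the null_blk target and copy malloc0 -> null0 via libaio.
    modprobe null_blk gb=1
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_dd.json

    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" }, "method": "bdev_malloc_create" },
      { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" }, "method": "bdev_xnvme_create" },
      { "method": "bdev_wait_for_examine" } ] } ] }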
05:11:06 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:47.397 05:11:06 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:47.397 05:11:06 -- dd/common.sh@31 -- # xtrace_disable 00:13:47.397 05:11:06 -- common/autotest_common.sh@10 -- # set +x 00:13:47.397 { 00:13:47.397 "subsystems": [ 00:13:47.397 { 00:13:47.397 "subsystem": "bdev", 00:13:47.397 "config": [ 00:13:47.397 { 00:13:47.397 "params": { 00:13:47.397 "block_size": 512, 00:13:47.397 "num_blocks": 2097152, 00:13:47.397 "name": "malloc0" 00:13:47.397 }, 00:13:47.397 "method": "bdev_malloc_create" 00:13:47.397 }, 00:13:47.397 { 00:13:47.397 "params": { 00:13:47.397 "io_mechanism": "libaio", 00:13:47.397 "filename": "/dev/nullb0", 00:13:47.397 "name": "null0" 00:13:47.397 }, 00:13:47.397 "method": "bdev_xnvme_create" 00:13:47.397 }, 00:13:47.397 { 00:13:47.397 "method": "bdev_wait_for_examine" 00:13:47.397 } 00:13:47.397 ] 00:13:47.397 } 00:13:47.397 ] 00:13:47.397 } 00:13:47.397 [2024-07-26 05:11:06.413688] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
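The copy pass that follows can be reproduced outside the harness with the same spdk_dd invocation and the JSON printed above; a minimal sketch, assuming this run's repo layout and that null_blk is already loaded (2097152 blocks x 512 B matches the 1024 MB copied below):

    cd /home/vagrant/spdk_repo/spdk
    build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    )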
00:13:47.397 [2024-07-26 05:11:06.414019] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67892 ] 00:13:47.656 [2024-07-26 05:11:06.596154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.914 [2024-07-26 05:11:06.820455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.431  Copying: 265/1024 [MB] (265 MBps) Copying: 531/1024 [MB] (265 MBps) Copying: 797/1024 [MB] (266 MBps) Copying: 1024/1024 [MB] (average 266 MBps) 00:13:57.431 00:13:57.431 05:11:16 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:57.431 05:11:16 -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:57.431 05:11:16 -- dd/common.sh@31 -- # xtrace_disable 00:13:57.431 05:11:16 -- common/autotest_common.sh@10 -- # set +x 00:13:57.431 { 00:13:57.431 "subsystems": [ 00:13:57.431 { 00:13:57.431 "subsystem": "bdev", 00:13:57.431 "config": [ 00:13:57.431 { 00:13:57.431 "params": { 00:13:57.431 "block_size": 512, 00:13:57.431 "num_blocks": 2097152, 00:13:57.431 "name": "malloc0" 00:13:57.431 }, 00:13:57.431 "method": "bdev_malloc_create" 00:13:57.431 }, 00:13:57.431 { 00:13:57.431 "params": { 00:13:57.431 "io_mechanism": "libaio", 00:13:57.431 "filename": "/dev/nullb0", 00:13:57.431 "name": "null0" 00:13:57.431 }, 00:13:57.431 "method": "bdev_xnvme_create" 00:13:57.431 }, 00:13:57.431 { 00:13:57.431 "method": "bdev_wait_for_examine" 00:13:57.431 } 00:13:57.431 ] 00:13:57.431 } 00:13:57.431 ] 00:13:57.431 } 00:13:57.431 [2024-07-26 05:11:16.335692] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
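The xnvme.sh@47 pass above is the same configuration with --ib and --ob swapped, so the data just pushed into the xnvme bdev is read back through the same io_mechanism. The /dev/fd/62 argument appears to be the gen_conf JSON arriving over a process-substitution descriptor, i.e. roughly:

    build/bin/spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)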
00:13:57.431 [2024-07-26 05:11:16.335864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68008 ] 00:13:57.431 [2024-07-26 05:11:16.514400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.689 [2024-07-26 05:11:16.731397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.419  Copying: 270/1024 [MB] (270 MBps) Copying: 543/1024 [MB] (272 MBps) Copying: 816/1024 [MB] (273 MBps) Copying: 1024/1024 [MB] (average 271 MBps) 00:14:07.420 00:14:07.420 05:11:26 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:07.420 05:11:26 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:07.420 05:11:26 -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:07.420 05:11:26 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:07.420 05:11:26 -- dd/common.sh@31 -- # xtrace_disable 00:14:07.420 05:11:26 -- common/autotest_common.sh@10 -- # set +x 00:14:07.420 { 00:14:07.420 "subsystems": [ 00:14:07.420 { 00:14:07.420 "subsystem": "bdev", 00:14:07.420 "config": [ 00:14:07.420 { 00:14:07.420 "params": { 00:14:07.420 "block_size": 512, 00:14:07.420 "num_blocks": 2097152, 00:14:07.420 "name": "malloc0" 00:14:07.420 }, 00:14:07.420 "method": "bdev_malloc_create" 00:14:07.420 }, 00:14:07.420 { 00:14:07.420 "params": { 00:14:07.420 "io_mechanism": "io_uring", 00:14:07.420 "filename": "/dev/nullb0", 00:14:07.420 "name": "null0" 00:14:07.420 }, 00:14:07.420 "method": "bdev_xnvme_create" 00:14:07.420 }, 00:14:07.420 { 00:14:07.420 "method": "bdev_wait_for_examine" 00:14:07.420 } 00:14:07.420 ] 00:14:07.420 } 00:14:07.420 ] 00:14:07.420 } 00:14:07.420 [2024-07-26 05:11:26.183613] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
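From here the whole exercise repeats with io_uring: per the xnvme.sh@38-39 trace, the outer loop rewrites only the io_mechanism key of the xnvme bdev before regenerating the config, roughly:

    for io in "${xnvme_io[@]}"; do    # xnvme_io=(libaio io_uring), filled at @20-21
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        # gen_conf re-emits the JSON, then the same malloc0 -> null0 and
        # null0 -> malloc0 spdk_dd copies run against it
    done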
00:14:07.420 [2024-07-26 05:11:26.183768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68123 ] 00:14:07.420 [2024-07-26 05:11:26.364326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.679 [2024-07-26 05:11:26.585191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.981  Copying: 280/1024 [MB] (280 MBps) Copying: 559/1024 [MB] (279 MBps) Copying: 839/1024 [MB] (279 MBps) Copying: 1024/1024 [MB] (average 280 MBps) 00:14:16.981 00:14:16.981 05:11:35 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:16.981 05:11:35 -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:16.981 05:11:35 -- dd/common.sh@31 -- # xtrace_disable 00:14:16.981 05:11:35 -- common/autotest_common.sh@10 -- # set +x 00:14:16.981 { 00:14:16.981 "subsystems": [ 00:14:16.981 { 00:14:16.981 "subsystem": "bdev", 00:14:16.981 "config": [ 00:14:16.981 { 00:14:16.981 "params": { 00:14:16.981 "block_size": 512, 00:14:16.981 "num_blocks": 2097152, 00:14:16.981 "name": "malloc0" 00:14:16.981 }, 00:14:16.981 "method": "bdev_malloc_create" 00:14:16.981 }, 00:14:16.981 { 00:14:16.981 "params": { 00:14:16.981 "io_mechanism": "io_uring", 00:14:16.981 "filename": "/dev/nullb0", 00:14:16.981 "name": "null0" 00:14:16.981 }, 00:14:16.981 "method": "bdev_xnvme_create" 00:14:16.981 }, 00:14:16.981 { 00:14:16.981 "method": "bdev_wait_for_examine" 00:14:16.981 } 00:14:16.981 ] 00:14:16.981 } 00:14:16.981 ] 00:14:16.981 } 00:14:16.981 [2024-07-26 05:11:35.900021] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
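All four copies target /dev/nullb0, the null_blk device that init_null_blk loaded at the top of the test and that remove_null_blk unloads just below; per the dd/common.sh traces the lifecycle reduces to two modprobes:

    modprobe null_blk gb=1    # exposes /dev/nullb0, a 1 GiB test device that
                              # completes I/O without persisting anything
    # ...spdk_dd copies run against it...
    modprobe -r null_blk      # torn down once the test ends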
00:14:16.981 [2024-07-26 05:11:35.900174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68227 ] 00:14:16.981 [2024-07-26 05:11:36.077972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.239 [2024-07-26 05:11:36.300318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.809  Copying: 289/1024 [MB] (289 MBps) Copying: 578/1024 [MB] (289 MBps) Copying: 865/1024 [MB] (286 MBps) Copying: 1024/1024 [MB] (average 287 MBps) 00:14:26.809 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:26.809 05:11:45 -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:26.809 ************************************ 00:14:26.809 END TEST xnvme_to_malloc_dd_copy 00:14:26.809 ************************************ 00:14:26.809 00:14:26.809 real 0m39.217s 00:14:26.809 user 0m34.514s 00:14:26.809 sys 0m4.181s 00:14:26.809 05:11:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.809 05:11:45 -- common/autotest_common.sh@10 -- # set +x 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:26.809 05:11:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:26.809 05:11:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:26.809 05:11:45 -- common/autotest_common.sh@10 -- # set +x 00:14:26.809 ************************************ 00:14:26.809 START TEST xnvme_bdevperf 00:14:26.809 ************************************ 00:14:26.809 05:11:45 -- common/autotest_common.sh@1104 -- # xnvme_bdevperf 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:26.809 05:11:45 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:14:26.809 05:11:45 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:14:26.809 05:11:45 -- dd/common.sh@191 -- # return 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@60 -- # local io 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:26.809 05:11:45 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:26.809 05:11:45 -- dd/common.sh@31 -- # xtrace_disable 00:14:26.809 05:11:45 -- common/autotest_common.sh@10 -- # set +x 00:14:26.809 { 00:14:26.809 "subsystems": [ 00:14:26.809 { 00:14:26.809 "subsystem": "bdev", 00:14:26.809 "config": [ 00:14:26.809 { 00:14:26.809 "params": { 00:14:26.809 "io_mechanism": "libaio", 00:14:26.809 "filename": "/dev/nullb0", 00:14:26.809 "name": "null0" 00:14:26.809 }, 00:14:26.809 "method": 
"bdev_xnvme_create" 00:14:26.809 }, 00:14:26.809 { 00:14:26.810 "method": "bdev_wait_for_examine" 00:14:26.810 } 00:14:26.810 ] 00:14:26.810 } 00:14:26.810 ] 00:14:26.810 } 00:14:26.810 [2024-07-26 05:11:45.689657] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:26.810 [2024-07-26 05:11:45.689826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68364 ] 00:14:26.810 [2024-07-26 05:11:45.873456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.068 [2024-07-26 05:11:46.094309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.633 Running I/O for 5 seconds... 00:14:32.896 00:14:32.896 Latency(us) 00:14:32.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.896 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:32.896 null0 : 5.00 164506.26 642.60 0.00 0.00 386.77 115.57 1162.48 00:14:32.896 =================================================================================================================== 00:14:32.896 Total : 164506.26 642.60 0.00 0.00 386.77 115.57 1162.48 00:14:33.831 05:11:52 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:33.831 05:11:52 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:33.831 05:11:52 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:33.831 05:11:52 -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:33.831 05:11:52 -- dd/common.sh@31 -- # xtrace_disable 00:14:33.831 05:11:52 -- common/autotest_common.sh@10 -- # set +x 00:14:33.831 { 00:14:33.831 "subsystems": [ 00:14:33.831 { 00:14:33.831 "subsystem": "bdev", 00:14:33.831 "config": [ 00:14:33.831 { 00:14:33.831 "params": { 00:14:33.831 "io_mechanism": "io_uring", 00:14:33.831 "filename": "/dev/nullb0", 00:14:33.831 "name": "null0" 00:14:33.831 }, 00:14:33.831 "method": "bdev_xnvme_create" 00:14:33.831 }, 00:14:33.831 { 00:14:33.831 "method": "bdev_wait_for_examine" 00:14:33.831 } 00:14:33.831 ] 00:14:33.831 } 00:14:33.831 ] 00:14:33.831 } 00:14:33.831 [2024-07-26 05:11:52.852749] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:33.831 [2024-07-26 05:11:52.852910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68446 ] 00:14:34.089 [2024-07-26 05:11:53.028284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.347 [2024-07-26 05:11:53.251154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.606 Running I/O for 5 seconds... 
00:14:39.868 00:14:39.868 Latency(us) 00:14:39.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.868 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:39.868 null0 : 5.00 211863.53 827.59 0.00 0.00 299.94 222.35 2028.50 00:14:39.868 =================================================================================================================== 00:14:39.868 Total : 211863.53 827.59 0.00 0.00 299.94 222.35 2028.50 00:14:40.802 05:11:59 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:40.802 05:11:59 -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:41.060 00:14:41.060 real 0m14.375s 00:14:41.060 user 0m10.925s 00:14:41.060 sys 0m3.225s 00:14:41.060 05:11:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.060 05:11:59 -- common/autotest_common.sh@10 -- # set +x 00:14:41.060 ************************************ 00:14:41.060 END TEST xnvme_bdevperf 00:14:41.060 ************************************ 00:14:41.060 00:14:41.060 real 0m53.812s 00:14:41.060 user 0m45.517s 00:14:41.060 sys 0m7.542s 00:14:41.060 05:11:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.060 ************************************ 00:14:41.060 END TEST nvme_xnvme 00:14:41.060 ************************************ 00:14:41.060 05:11:59 -- common/autotest_common.sh@10 -- # set +x 00:14:41.060 05:12:00 -- spdk/autotest.sh@257 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:41.060 05:12:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:41.060 05:12:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:41.060 05:12:00 -- common/autotest_common.sh@10 -- # set +x 00:14:41.060 ************************************ 00:14:41.060 START TEST blockdev_xnvme 00:14:41.060 ************************************ 00:14:41.060 05:12:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:41.060 * Looking for test storage... 
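Each of the two bdevperf passes above reduces to a single invocation: 64-deep (-q) 4 KiB (-o) random reads (-w randread) for five seconds (-t) against the lone xnvme bdev (-T null0). A sketch of the libaio pass, with the JSON lifted from the gen_conf output above (the io_uring pass swaps only the io_mechanism value):

    build/examples/bdevperf -q 64 -w randread -t 5 -T null0 -o 4096 --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
              "method": "bdev_xnvme_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    )

The two latency tables are self-consistent: throughput is IOPS times I/O size (211863.53 x 4096 B ≈ 827.6 MiB/s as reported, and 164506.26 x 4096 B ≈ 642.6 MiB/s for libaio), and by Little's law average latency ≈ queue depth / IOPS, giving 64/211863.53 s ≈ 302 µs and 64/164506.26 s ≈ 389 µs, within a few µs of the measured 299.94 and 386.77.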
00:14:41.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:41.060 05:12:00 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:41.060 05:12:00 -- bdev/nbd_common.sh@6 -- # set -e 00:14:41.060 05:12:00 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:41.060 05:12:00 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:41.060 05:12:00 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:41.060 05:12:00 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:41.060 05:12:00 -- bdev/blockdev.sh@18 -- # : 00:14:41.060 05:12:00 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:14:41.060 05:12:00 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:14:41.060 05:12:00 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:14:41.060 05:12:00 -- bdev/blockdev.sh@672 -- # uname -s 00:14:41.060 05:12:00 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:14:41.060 05:12:00 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:14:41.060 05:12:00 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:14:41.060 05:12:00 -- bdev/blockdev.sh@681 -- # crypto_device= 00:14:41.060 05:12:00 -- bdev/blockdev.sh@682 -- # dek= 00:14:41.060 05:12:00 -- bdev/blockdev.sh@683 -- # env_ctx= 00:14:41.060 05:12:00 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:14:41.060 05:12:00 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:14:41.060 05:12:00 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:14:41.060 05:12:00 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:14:41.060 05:12:00 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:14:41.060 05:12:00 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=68590 00:14:41.060 05:12:00 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:41.060 05:12:00 -- bdev/blockdev.sh@47 -- # waitforlisten 68590 00:14:41.060 05:12:00 -- common/autotest_common.sh@819 -- # '[' -z 68590 ']' 00:14:41.060 05:12:00 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:41.060 05:12:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.060 05:12:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:41.060 05:12:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.060 05:12:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:41.060 05:12:00 -- common/autotest_common.sh@10 -- # set +x 00:14:41.319 [2024-07-26 05:12:00.266175] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
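start_spdk_tgt launches the target and waitforlisten blocks until its RPC socket at /var/tmp/spdk.sock answers; a rough stand-in for what the trace above does (the real helper is more careful about retries and startup failures):

    build/bin/spdk_tgt & spdk_tgt_pid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_tgt_pid" || break    # stop waiting if the target died
        sleep 0.1
    done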
00:14:41.319 [2024-07-26 05:12:00.267289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68590 ] 00:14:41.578 [2024-07-26 05:12:00.449942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.578 [2024-07-26 05:12:00.668720] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:41.578 [2024-07-26 05:12:00.668916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.952 05:12:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:42.952 05:12:01 -- common/autotest_common.sh@852 -- # return 0 00:14:42.952 05:12:01 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:14:42.952 05:12:01 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:14:42.952 05:12:01 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:14:42.952 05:12:01 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:14:42.952 05:12:01 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:43.210 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:43.469 Waiting for block devices as requested 00:14:43.469 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:14:43.469 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:14:43.738 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:14:43.738 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:14:49.015 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:14:49.015 05:12:07 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:14:49.015 05:12:07 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:14:49.015 05:12:07 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:14:49.015 05:12:07 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:14:49.015 05:12:07 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:49.015 05:12:07 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0c0n1 00:14:49.015 05:12:07 -- common/autotest_common.sh@1647 -- # local device=nvme0c0n1 00:14:49.015 05:12:07 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:14:49.015 05:12:07 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:49.015 05:12:07 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:49.015 05:12:07 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:14:49.015 05:12:07 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:14:49.015 05:12:07 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:49.015 05:12:07 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:49.015 05:12:07 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:49.015 05:12:07 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:14:49.015 05:12:07 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:14:49.015 05:12:07 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:49.015 05:12:07 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:49.015 05:12:07 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:49.015 05:12:07 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:14:49.016 05:12:07 -- common/autotest_common.sh@1647 -- # local 
device=nvme1n2 00:14:49.016 05:12:07 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:49.016 05:12:07 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:49.016 05:12:07 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:49.016 05:12:07 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:14:49.016 05:12:07 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:14:49.016 05:12:07 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:49.016 05:12:07 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:49.016 05:12:07 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:49.016 05:12:07 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme2n1 00:14:49.016 05:12:07 -- common/autotest_common.sh@1647 -- # local device=nvme2n1 00:14:49.016 05:12:07 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:49.016 05:12:07 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:49.016 05:12:07 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:14:49.016 05:12:07 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme3n1 00:14:49.016 05:12:07 -- common/autotest_common.sh@1647 -- # local device=nvme3n1 00:14:49.016 05:12:07 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:49.016 05:12:07 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:49.016 05:12:07 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:49.016 05:12:07 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:49.016 05:12:07 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:49.016 05:12:07 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:49.016 05:12:07 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:49.016 05:12:07 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:49.016 05:12:07 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:49.016 05:12:07 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:49.016 05:12:07 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:49.016 05:12:07 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:14:49.016 05:12:07 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:49.016 05:12:07 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:14:49.016 05:12:07 -- bdev/blockdev.sh@98 -- # rpc_cmd 
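After get_zoned_devs walks /sys/block/*/queue/zoned (every namespace here reports none, so nothing is excluded), the blockdev.sh@92-94 loop above accumulates one bdev_xnvme_create command per NVMe namespace and batches them through rpc_cmd. Issued one RPC at a time instead, it is equivalent to:

    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue    # skip anything that is not a block device
        scripts/rpc.py bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring
    done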
00:14:49.016 05:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.016 05:12:07 -- common/autotest_common.sh@10 -- # set +x 00:14:49.016 05:12:07 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:49.016 nvme0n1 00:14:49.016 nvme1n1 00:14:49.016 nvme1n2 00:14:49.016 nvme1n3 00:14:49.016 nvme2n1 00:14:49.016 nvme3n1 00:14:49.016 05:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:14:49.016 05:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.016 05:12:07 -- common/autotest_common.sh@10 -- # set +x 00:14:49.016 05:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@738 -- # cat 00:14:49.016 05:12:07 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:14:49.016 05:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.016 05:12:07 -- common/autotest_common.sh@10 -- # set +x 00:14:49.016 05:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:14:49.016 05:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.016 05:12:07 -- common/autotest_common.sh@10 -- # set +x 00:14:49.016 05:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:49.016 05:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.016 05:12:07 -- common/autotest_common.sh@10 -- # set +x 00:14:49.016 05:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.016 05:12:07 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:14:49.016 05:12:07 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:14:49.016 05:12:08 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:14:49.016 05:12:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.016 05:12:08 -- common/autotest_common.sh@10 -- # set +x 00:14:49.016 05:12:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.016 05:12:08 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:14:49.016 05:12:08 -- bdev/blockdev.sh@747 -- # jq -r .name 00:14:49.016 05:12:08 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "42c3ede9-1061-4f8a-bd3f-3dfd00de9333"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "42c3ede9-1061-4f8a-bd3f-3dfd00de9333",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c9a79b89-0342-4354-afff-11aa8f0e3e99"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c9a79b89-0342-4354-afff-11aa8f0e3e99",' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "47e6fe01-b4ab-47a1-95ea-f97dd88da531"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "47e6fe01-b4ab-47a1-95ea-f97dd88da531",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "d9027314-326b-4ed0-bbd1-ef7d156db897"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d9027314-326b-4ed0-bbd1-ef7d156db897",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f679f66e-21b8-402f-963e-179968803b35"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f679f66e-21b8-402f-963e-179968803b35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0dea93fe-1e75-41f2-ae8d-67e00aa84fc7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0dea93fe-1e75-41f2-ae8d-67e00aa84fc7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:49.016 05:12:08 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:14:49.016 05:12:08 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:14:49.016 05:12:08 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:14:49.016 05:12:08 -- bdev/blockdev.sh@752 -- # killprocess 68590 00:14:49.016 05:12:08 -- 
common/autotest_common.sh@926 -- # '[' -z 68590 ']' 00:14:49.016 05:12:08 -- common/autotest_common.sh@930 -- # kill -0 68590 00:14:49.016 05:12:08 -- common/autotest_common.sh@931 -- # uname 00:14:49.016 05:12:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:49.016 05:12:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68590 00:14:49.275 killing process with pid 68590 00:14:49.275 05:12:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:49.275 05:12:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:49.275 05:12:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68590' 00:14:49.275 05:12:08 -- common/autotest_common.sh@945 -- # kill 68590 00:14:49.275 05:12:08 -- common/autotest_common.sh@950 -- # wait 68590 00:14:51.808 05:12:10 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:51.808 05:12:10 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:51.808 05:12:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:14:51.808 05:12:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:51.808 05:12:10 -- common/autotest_common.sh@10 -- # set +x 00:14:51.808 ************************************ 00:14:51.808 START TEST bdev_hello_world 00:14:51.808 ************************************ 00:14:51.808 05:12:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:51.808 [2024-07-26 05:12:10.639594] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:51.808 [2024-07-26 05:12:10.639750] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68990 ] 00:14:51.808 [2024-07-26 05:12:10.821342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.066 [2024-07-26 05:12:11.054438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.633 [2024-07-26 05:12:11.539366] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:52.633 [2024-07-26 05:12:11.539414] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:52.633 [2024-07-26 05:12:11.539431] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:52.633 [2024-07-26 05:12:11.541449] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:52.633 [2024-07-26 05:12:11.541753] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:52.633 [2024-07-26 05:12:11.541771] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:52.633 [2024-07-26 05:12:11.541918] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
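The hello-world pass above drives the stock example binary against the generated bdev config; per its NOTICE lines it opens nvme0n1, writes "Hello World!", reads the string back, and stops the app. The invocation, minus the harness wrapping, is simply:

    build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1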
00:14:52.633 00:14:52.633 [2024-07-26 05:12:11.541936] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:54.013 00:14:54.013 real 0m2.256s 00:14:54.013 user 0m1.888s 00:14:54.013 sys 0m0.253s 00:14:54.013 05:12:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.013 ************************************ 00:14:54.013 END TEST bdev_hello_world 00:14:54.013 ************************************ 00:14:54.013 05:12:12 -- common/autotest_common.sh@10 -- # set +x 00:14:54.013 05:12:12 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:14:54.013 05:12:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:54.013 05:12:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:54.013 05:12:12 -- common/autotest_common.sh@10 -- # set +x 00:14:54.013 ************************************ 00:14:54.013 START TEST bdev_bounds 00:14:54.013 ************************************ 00:14:54.013 05:12:12 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:14:54.013 05:12:12 -- bdev/blockdev.sh@288 -- # bdevio_pid=69032 00:14:54.013 Process bdevio pid: 69032 00:14:54.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.013 05:12:12 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:54.013 05:12:12 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 69032' 00:14:54.013 05:12:12 -- bdev/blockdev.sh@291 -- # waitforlisten 69032 00:14:54.013 05:12:12 -- common/autotest_common.sh@819 -- # '[' -z 69032 ']' 00:14:54.013 05:12:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.013 05:12:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:54.013 05:12:12 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:54.013 05:12:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.013 05:12:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:54.013 05:12:12 -- common/autotest_common.sh@10 -- # set +x 00:14:54.013 [2024-07-26 05:12:12.955060] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:14:54.013 [2024-07-26 05:12:12.955706] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69032 ] 00:14:54.272 [2024-07-26 05:12:13.137594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:54.272 [2024-07-26 05:12:13.362889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.272 [2024-07-26 05:12:13.363023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.272 [2024-07-26 05:12:13.363056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.647 05:12:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:55.647 05:12:14 -- common/autotest_common.sh@852 -- # return 0 00:14:55.647 05:12:14 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:55.647 I/O targets: 00:14:55.647 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:55.647 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:55.647 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:55.647 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:55.647 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:55.647 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:55.647 00:14:55.647 00:14:55.647 CUnit - A unit testing framework for C - Version 2.1-3 00:14:55.647 http://cunit.sourceforge.net/ 00:14:55.647 00:14:55.647 00:14:55.647 Suite: bdevio tests on: nvme3n1 00:14:55.647 Test: blockdev write read block ...passed 00:14:55.647 Test: blockdev write zeroes read block ...passed 00:14:55.647 Test: blockdev write zeroes read no split ...passed 00:14:55.647 Test: blockdev write zeroes read split ...passed 00:14:55.647 Test: blockdev write zeroes read split partial ...passed 00:14:55.647 Test: blockdev reset ...passed 00:14:55.647 Test: blockdev write read 8 blocks ...passed 00:14:55.647 Test: blockdev write read size > 128k ...passed 00:14:55.647 Test: blockdev write read invalid size ...passed 00:14:55.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:55.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:55.647 Test: blockdev write read max offset ...passed 00:14:55.647 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:55.647 Test: blockdev writev readv 8 blocks ...passed 00:14:55.647 Test: blockdev writev readv 30 x 1block ...passed 00:14:55.647 Test: blockdev writev readv block ...passed 00:14:55.647 Test: blockdev writev readv size > 128k ...passed 00:14:55.647 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:55.647 Test: blockdev comparev and writev ...passed 00:14:55.647 Test: blockdev nvme passthru rw ...passed 00:14:55.647 Test: blockdev nvme passthru vendor specific ...passed 00:14:55.647 Test: blockdev nvme admin passthru ...passed 00:14:55.647 Test: blockdev copy ...passed 00:14:55.647 Suite: bdevio tests on: nvme2n1 00:14:55.647 Test: blockdev write read block ...passed 00:14:55.647 Test: blockdev write zeroes read block ...passed 00:14:55.647 Test: blockdev write zeroes read no split ...passed 00:14:55.647 Test: blockdev write zeroes read split ...passed 00:14:55.906 Test: blockdev write zeroes read split partial ...passed 00:14:55.906 Test: blockdev reset ...passed 00:14:55.906 Test: blockdev write read 8 blocks ...passed 00:14:55.906 Test: blockdev write read size > 128k 
...passed 00:14:55.906 Test: blockdev write read invalid size ...passed 00:14:55.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:55.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:55.906 Test: blockdev write read max offset ...passed 00:14:55.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:55.906 Test: blockdev writev readv 8 blocks ...passed 00:14:55.906 Test: blockdev writev readv 30 x 1block ...passed 00:14:55.906 Test: blockdev writev readv block ...passed 00:14:55.906 Test: blockdev writev readv size > 128k ...passed 00:14:55.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:55.906 Test: blockdev comparev and writev ...passed 00:14:55.906 Test: blockdev nvme passthru rw ...passed 00:14:55.906 Test: blockdev nvme passthru vendor specific ...passed 00:14:55.906 Test: blockdev nvme admin passthru ...passed 00:14:55.906 Test: blockdev copy ...passed 00:14:55.906 Suite: bdevio tests on: nvme1n3 00:14:55.906 Test: blockdev write read block ...passed 00:14:55.906 Test: blockdev write zeroes read block ...passed 00:14:55.906 Test: blockdev write zeroes read no split ...passed 00:14:55.906 Test: blockdev write zeroes read split ...passed 00:14:55.906 Test: blockdev write zeroes read split partial ...passed 00:14:55.906 Test: blockdev reset ...passed 00:14:55.906 Test: blockdev write read 8 blocks ...passed 00:14:55.906 Test: blockdev write read size > 128k ...passed 00:14:55.906 Test: blockdev write read invalid size ...passed 00:14:55.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:55.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:55.906 Test: blockdev write read max offset ...passed 00:14:55.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:55.906 Test: blockdev writev readv 8 blocks ...passed 00:14:55.906 Test: blockdev writev readv 30 x 1block ...passed 00:14:55.906 Test: blockdev writev readv block ...passed 00:14:55.906 Test: blockdev writev readv size > 128k ...passed 00:14:55.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:55.906 Test: blockdev comparev and writev ...passed 00:14:55.906 Test: blockdev nvme passthru rw ...passed 00:14:55.906 Test: blockdev nvme passthru vendor specific ...passed 00:14:55.906 Test: blockdev nvme admin passthru ...passed 00:14:55.906 Test: blockdev copy ...passed 00:14:55.906 Suite: bdevio tests on: nvme1n2 00:14:55.906 Test: blockdev write read block ...passed 00:14:55.906 Test: blockdev write zeroes read block ...passed 00:14:55.906 Test: blockdev write zeroes read no split ...passed 00:14:55.906 Test: blockdev write zeroes read split ...passed 00:14:55.906 Test: blockdev write zeroes read split partial ...passed 00:14:55.906 Test: blockdev reset ...passed 00:14:55.906 Test: blockdev write read 8 blocks ...passed 00:14:55.906 Test: blockdev write read size > 128k ...passed 00:14:55.906 Test: blockdev write read invalid size ...passed 00:14:55.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:55.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:55.906 Test: blockdev write read max offset ...passed 00:14:55.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:55.906 Test: blockdev writev readv 8 blocks ...passed 00:14:55.906 Test: blockdev writev readv 30 x 1block ...passed 00:14:55.906 Test: blockdev writev readv 
block ...passed 00:14:55.906 Test: blockdev writev readv size > 128k ...passed 00:14:55.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:55.906 Test: blockdev comparev and writev ...passed 00:14:55.906 Test: blockdev nvme passthru rw ...passed 00:14:55.906 Test: blockdev nvme passthru vendor specific ...passed 00:14:55.906 Test: blockdev nvme admin passthru ...passed 00:14:55.906 Test: blockdev copy ...passed 00:14:55.906 Suite: bdevio tests on: nvme1n1 00:14:55.906 Test: blockdev write read block ...passed 00:14:55.906 Test: blockdev write zeroes read block ...passed 00:14:55.906 Test: blockdev write zeroes read no split ...passed 00:14:55.906 Test: blockdev write zeroes read split ...passed 00:14:55.906 Test: blockdev write zeroes read split partial ...passed 00:14:55.906 Test: blockdev reset ...passed 00:14:55.906 Test: blockdev write read 8 blocks ...passed 00:14:55.906 Test: blockdev write read size > 128k ...passed 00:14:55.906 Test: blockdev write read invalid size ...passed 00:14:55.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:55.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:55.906 Test: blockdev write read max offset ...passed 00:14:55.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:55.906 Test: blockdev writev readv 8 blocks ...passed 00:14:55.906 Test: blockdev writev readv 30 x 1block ...passed 00:14:55.906 Test: blockdev writev readv block ...passed 00:14:56.164 Test: blockdev writev readv size > 128k ...passed 00:14:56.165 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:56.165 Test: blockdev comparev and writev ...passed 00:14:56.165 Test: blockdev nvme passthru rw ...passed 00:14:56.165 Test: blockdev nvme passthru vendor specific ...passed 00:14:56.165 Test: blockdev nvme admin passthru ...passed 00:14:56.165 Test: blockdev copy ...passed 00:14:56.165 Suite: bdevio tests on: nvme0n1 00:14:56.165 Test: blockdev write read block ...passed 00:14:56.165 Test: blockdev write zeroes read block ...passed 00:14:56.165 Test: blockdev write zeroes read no split ...passed 00:14:56.165 Test: blockdev write zeroes read split ...passed 00:14:56.165 Test: blockdev write zeroes read split partial ...passed 00:14:56.165 Test: blockdev reset ...passed 00:14:56.165 Test: blockdev write read 8 blocks ...passed 00:14:56.165 Test: blockdev write read size > 128k ...passed 00:14:56.165 Test: blockdev write read invalid size ...passed 00:14:56.165 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:56.165 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:56.165 Test: blockdev write read max offset ...passed 00:14:56.165 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:56.165 Test: blockdev writev readv 8 blocks ...passed 00:14:56.165 Test: blockdev writev readv 30 x 1block ...passed 00:14:56.165 Test: blockdev writev readv block ...passed 00:14:56.165 Test: blockdev writev readv size > 128k ...passed 00:14:56.165 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:56.165 Test: blockdev comparev and writev ...passed 00:14:56.165 Test: blockdev nvme passthru rw ...passed 00:14:56.165 Test: blockdev nvme passthru vendor specific ...passed 00:14:56.165 Test: blockdev nvme admin passthru ...passed 00:14:56.165 Test: blockdev copy ...passed 00:14:56.165 00:14:56.165 Run Summary: Type Total Ran Passed Failed Inactive 00:14:56.165 suites 6 6 n/a 0 0 
00:14:56.165 tests 138 138 138 0 0 00:14:56.165 asserts 780 780 780 0 n/a 00:14:56.165 00:14:56.165 Elapsed time = 1.352 seconds 00:14:56.165 0 00:14:56.165 05:12:15 -- bdev/blockdev.sh@293 -- # killprocess 69032 00:14:56.165 05:12:15 -- common/autotest_common.sh@926 -- # '[' -z 69032 ']' 00:14:56.165 05:12:15 -- common/autotest_common.sh@930 -- # kill -0 69032 00:14:56.165 05:12:15 -- common/autotest_common.sh@931 -- # uname 00:14:56.165 05:12:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:56.165 05:12:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69032 00:14:56.165 05:12:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:56.165 05:12:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:56.165 05:12:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69032' 00:14:56.165 killing process with pid 69032 00:14:56.165 05:12:15 -- common/autotest_common.sh@945 -- # kill 69032 00:14:56.165 05:12:15 -- common/autotest_common.sh@950 -- # wait 69032 00:14:57.541 05:12:16 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:14:57.541 00:14:57.541 real 0m3.562s 00:14:57.541 user 0m8.821s 00:14:57.541 sys 0m0.485s 00:14:57.541 05:12:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.541 05:12:16 -- common/autotest_common.sh@10 -- # set +x 00:14:57.541 ************************************ 00:14:57.541 END TEST bdev_bounds 00:14:57.541 ************************************ 00:14:57.541 05:12:16 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:14:57.541 05:12:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:57.541 05:12:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:57.541 05:12:16 -- common/autotest_common.sh@10 -- # set +x 00:14:57.541 ************************************ 00:14:57.541 START TEST bdev_nbd 00:14:57.541 ************************************ 00:14:57.541 05:12:16 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:14:57.541 05:12:16 -- bdev/blockdev.sh@298 -- # uname -s 00:14:57.541 05:12:16 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:14:57.541 05:12:16 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:57.541 05:12:16 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:57.541 05:12:16 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:57.541 05:12:16 -- bdev/blockdev.sh@302 -- # local bdev_all 00:14:57.541 05:12:16 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:14:57.541 05:12:16 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:14:57.541 05:12:16 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:57.541 05:12:16 -- bdev/blockdev.sh@309 -- # local nbd_all 00:14:57.541 05:12:16 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:14:57.541 05:12:16 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:57.541 05:12:16 -- bdev/blockdev.sh@312 -- # local nbd_list 00:14:57.541 05:12:16 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 
'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:57.541 05:12:16 -- bdev/blockdev.sh@313 -- # local bdev_list 00:14:57.541 05:12:16 -- bdev/blockdev.sh@316 -- # nbd_pid=69113 00:14:57.541 05:12:16 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:57.541 05:12:16 -- bdev/blockdev.sh@318 -- # waitforlisten 69113 /var/tmp/spdk-nbd.sock 00:14:57.541 05:12:16 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:57.541 05:12:16 -- common/autotest_common.sh@819 -- # '[' -z 69113 ']' 00:14:57.541 05:12:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:57.541 05:12:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:57.541 05:12:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:57.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:57.541 05:12:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:57.541 05:12:16 -- common/autotest_common.sh@10 -- # set +x 00:14:57.541 [2024-07-26 05:12:16.556907] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:57.541 [2024-07-26 05:12:16.557273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.800 [2024-07-26 05:12:16.721390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.058 [2024-07-26 05:12:16.943625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.625 05:12:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:58.625 05:12:17 -- common/autotest_common.sh@852 -- # return 0 00:14:58.625 05:12:17 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:14:58.625 05:12:17 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:58.625 05:12:17 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:58.625 05:12:17 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:58.625 05:12:17 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:14:58.625 05:12:17 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:58.625 05:12:17 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:14:58.625 05:12:17 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:58.625 05:12:17 -- bdev/nbd_common.sh@24 -- # local i 00:14:58.625 05:12:17 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:58.625 05:12:17 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:58.625 05:12:17 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:58.625 05:12:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:58.625 05:12:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:58.626 05:12:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:58.626 05:12:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:58.626 05:12:17 -- common/autotest_common.sh@856 -- # local 
nbd_name=nbd0 00:14:58.626 05:12:17 -- common/autotest_common.sh@857 -- # local i 00:14:58.626 05:12:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:58.626 05:12:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:58.626 05:12:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:58.626 05:12:17 -- common/autotest_common.sh@861 -- # break 00:14:58.626 05:12:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:58.626 05:12:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:58.626 05:12:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.626 1+0 records in 00:14:58.626 1+0 records out 00:14:58.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000884063 s, 4.6 MB/s 00:14:58.626 05:12:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.626 05:12:17 -- common/autotest_common.sh@874 -- # size=4096 00:14:58.626 05:12:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.626 05:12:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:58.626 05:12:17 -- common/autotest_common.sh@877 -- # return 0 00:14:58.626 05:12:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:58.626 05:12:17 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:58.626 05:12:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:58.884 05:12:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:58.884 05:12:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:58.884 05:12:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:58.884 05:12:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:14:58.884 05:12:17 -- common/autotest_common.sh@857 -- # local i 00:14:58.884 05:12:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:58.884 05:12:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:58.884 05:12:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:14:58.884 05:12:17 -- common/autotest_common.sh@861 -- # break 00:14:58.884 05:12:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:58.884 05:12:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:58.884 05:12:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:58.884 1+0 records in 00:14:58.884 1+0 records out 00:14:58.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597939 s, 6.9 MB/s 00:14:58.884 05:12:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.884 05:12:17 -- common/autotest_common.sh@874 -- # size=4096 00:14:58.884 05:12:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:58.884 05:12:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:58.884 05:12:17 -- common/autotest_common.sh@877 -- # return 0 00:14:58.884 05:12:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:58.884 05:12:17 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:58.884 05:12:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:14:59.143 05:12:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:59.143 05:12:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:59.143 05:12:18 -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:14:59.143 05:12:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:14:59.143 05:12:18 -- common/autotest_common.sh@857 -- # local i 00:14:59.143 05:12:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:59.143 05:12:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:59.143 05:12:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:14:59.143 05:12:18 -- common/autotest_common.sh@861 -- # break 00:14:59.143 05:12:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:59.143 05:12:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:59.143 05:12:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.143 1+0 records in 00:14:59.143 1+0 records out 00:14:59.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519126 s, 7.9 MB/s 00:14:59.143 05:12:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.143 05:12:18 -- common/autotest_common.sh@874 -- # size=4096 00:14:59.143 05:12:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.143 05:12:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:59.143 05:12:18 -- common/autotest_common.sh@877 -- # return 0 00:14:59.143 05:12:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:59.143 05:12:18 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:59.143 05:12:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:14:59.440 05:12:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:59.440 05:12:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:59.440 05:12:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:59.440 05:12:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:14:59.440 05:12:18 -- common/autotest_common.sh@857 -- # local i 00:14:59.440 05:12:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:59.440 05:12:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:59.440 05:12:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:14:59.440 05:12:18 -- common/autotest_common.sh@861 -- # break 00:14:59.440 05:12:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:59.440 05:12:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:59.440 05:12:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.440 1+0 records in 00:14:59.440 1+0 records out 00:14:59.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698153 s, 5.9 MB/s 00:14:59.440 05:12:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.440 05:12:18 -- common/autotest_common.sh@874 -- # size=4096 00:14:59.440 05:12:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.440 05:12:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:59.440 05:12:18 -- common/autotest_common.sh@877 -- # return 0 00:14:59.440 05:12:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:59.441 05:12:18 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:59.441 05:12:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:59.698 05:12:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:59.698 05:12:18 -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:59.698 05:12:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:59.698 05:12:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:14:59.698 05:12:18 -- common/autotest_common.sh@857 -- # local i 00:14:59.698 05:12:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:59.698 05:12:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:59.698 05:12:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:14:59.698 05:12:18 -- common/autotest_common.sh@861 -- # break 00:14:59.698 05:12:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:59.698 05:12:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:59.698 05:12:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.698 1+0 records in 00:14:59.698 1+0 records out 00:14:59.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780142 s, 5.3 MB/s 00:14:59.698 05:12:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.698 05:12:18 -- common/autotest_common.sh@874 -- # size=4096 00:14:59.698 05:12:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.698 05:12:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:59.698 05:12:18 -- common/autotest_common.sh@877 -- # return 0 00:14:59.698 05:12:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:59.698 05:12:18 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:59.698 05:12:18 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:59.956 05:12:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:59.956 05:12:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:59.956 05:12:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:59.956 05:12:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:14:59.956 05:12:18 -- common/autotest_common.sh@857 -- # local i 00:14:59.956 05:12:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:59.956 05:12:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:59.956 05:12:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:14:59.956 05:12:18 -- common/autotest_common.sh@861 -- # break 00:14:59.956 05:12:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:59.956 05:12:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:59.956 05:12:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.956 1+0 records in 00:14:59.956 1+0 records out 00:14:59.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626412 s, 6.5 MB/s 00:14:59.956 05:12:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.956 05:12:18 -- common/autotest_common.sh@874 -- # size=4096 00:14:59.957 05:12:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.957 05:12:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:59.957 05:12:18 -- common/autotest_common.sh@877 -- # return 0 00:14:59.957 05:12:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:59.957 05:12:18 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:59.957 05:12:18 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:00.215 05:12:19 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:00.215 { 00:15:00.215 "nbd_device": "/dev/nbd0", 00:15:00.215 "bdev_name": "nvme0n1" 00:15:00.215 }, 00:15:00.215 { 00:15:00.215 "nbd_device": "/dev/nbd1", 00:15:00.215 "bdev_name": "nvme1n1" 00:15:00.215 }, 00:15:00.215 { 00:15:00.215 "nbd_device": "/dev/nbd2", 00:15:00.215 "bdev_name": "nvme1n2" 00:15:00.215 }, 00:15:00.215 { 00:15:00.215 "nbd_device": "/dev/nbd3", 00:15:00.215 "bdev_name": "nvme1n3" 00:15:00.215 }, 00:15:00.215 { 00:15:00.215 "nbd_device": "/dev/nbd4", 00:15:00.215 "bdev_name": "nvme2n1" 00:15:00.215 }, 00:15:00.215 { 00:15:00.215 "nbd_device": "/dev/nbd5", 00:15:00.215 "bdev_name": "nvme3n1" 00:15:00.215 } 00:15:00.215 ]' 00:15:00.215 05:12:19 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:00.215 05:12:19 -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:00.215 { 00:15:00.215 "nbd_device": "/dev/nbd0", 00:15:00.215 "bdev_name": "nvme0n1" 00:15:00.215 }, 00:15:00.215 { 00:15:00.215 "nbd_device": "/dev/nbd1", 00:15:00.215 "bdev_name": "nvme1n1" 00:15:00.215 }, 00:15:00.215 { 00:15:00.215 "nbd_device": "/dev/nbd2", 00:15:00.215 "bdev_name": "nvme1n2" 00:15:00.215 }, 00:15:00.215 { 00:15:00.215 "nbd_device": "/dev/nbd3", 00:15:00.215 "bdev_name": "nvme1n3" 00:15:00.215 }, 00:15:00.215 { 00:15:00.215 "nbd_device": "/dev/nbd4", 00:15:00.215 "bdev_name": "nvme2n1" 00:15:00.215 }, 00:15:00.215 { 00:15:00.215 "nbd_device": "/dev/nbd5", 00:15:00.215 "bdev_name": "nvme3n1" 00:15:00.215 } 00:15:00.215 ]' 00:15:00.215 05:12:19 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:00.215 05:12:19 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:00.215 05:12:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.215 05:12:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:00.215 05:12:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:00.215 05:12:19 -- bdev/nbd_common.sh@51 -- # local i 00:15:00.215 05:12:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.215 05:12:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:00.474 05:12:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.474 05:12:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.474 05:12:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.474 05:12:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.474 05:12:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.474 05:12:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:00.474 05:12:19 -- bdev/nbd_common.sh@41 -- # break 00:15:00.474 05:12:19 -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.474 05:12:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.474 05:12:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:00.733 05:12:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:00.733 05:12:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:00.733 05:12:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:00.733 05:12:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.733 05:12:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.733 05:12:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:15:00.733 05:12:19 -- bdev/nbd_common.sh@41 -- # break 00:15:00.733 05:12:19 -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.733 05:12:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.733 05:12:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:00.991 05:12:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:00.991 05:12:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:00.991 05:12:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:00.991 05:12:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.991 05:12:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.991 05:12:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:00.991 05:12:19 -- bdev/nbd_common.sh@41 -- # break 00:15:00.991 05:12:19 -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.991 05:12:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.991 05:12:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:00.991 05:12:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:00.991 05:12:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:00.991 05:12:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:00.991 05:12:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.991 05:12:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.991 05:12:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:00.991 05:12:20 -- bdev/nbd_common.sh@41 -- # break 00:15:00.991 05:12:20 -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.991 05:12:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.991 05:12:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:01.251 05:12:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:01.251 05:12:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:01.251 05:12:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:01.251 05:12:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.251 05:12:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.251 05:12:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:01.251 05:12:20 -- bdev/nbd_common.sh@41 -- # break 00:15:01.251 05:12:20 -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.251 05:12:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.251 05:12:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:01.511 05:12:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:01.511 05:12:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:01.511 05:12:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:01.511 05:12:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.511 05:12:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.511 05:12:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:01.511 05:12:20 -- bdev/nbd_common.sh@41 -- # break 00:15:01.511 05:12:20 -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.511 05:12:20 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:01.511 05:12:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:01.511 05:12:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
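This stretch of trace is the NBD teardown half of the test: for each of the six devices an nbd_stop_disk RPC is issued, and waitfornbd_exit then polls /proc/partitions until the kernel has actually released the device, after which nbd_get_count (via the nbd_get_disks call just issued) confirms that no exports remain. A minimal sketch of that polling idiom, reconstructed from the xtrace above; the retry cap of 20 matches the traced loop, but the sleep interval and the negated grep condition are assumptions rather than code copied from nbd_common.sh:

# hypothetical reconstruction of the waitfornbd_exit helper seen in the traces
waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        # a stopped device drops out of /proc/partitions once the kernel releases it
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1  # assumed back-off; the trace only shows the loop bounds
    done
    return 0
}

The companion waitfornbd helper used on the start-up side inverts the check and additionally reads one 4096-byte block with dd iflag=direct, which is what produces the repeated "1+0 records in / 1+0 records out" lines throughout this log. The nbd_get_disks output that follows should be an empty JSON array, which grep -c /dev/nbd reduces to a count of 0.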
00:15:01.511 05:12:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:01.511 05:12:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:01.511 05:12:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@65 -- # true 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@65 -- # count=0 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@122 -- # count=0 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@127 -- # return 0 00:15:01.771 05:12:20 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@12 -- # local i 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:01.771 05:12:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:02.031 /dev/nbd0 00:15:02.031 05:12:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:02.031 05:12:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:02.031 05:12:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:02.031 05:12:20 -- common/autotest_common.sh@857 -- # local i 00:15:02.031 05:12:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:02.031 05:12:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:02.031 05:12:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:02.031 05:12:20 -- common/autotest_common.sh@861 -- # break 00:15:02.031 05:12:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:02.031 05:12:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:02.031 05:12:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.031 1+0 records in 00:15:02.031 1+0 records out 00:15:02.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614101 s, 
6.7 MB/s 00:15:02.031 05:12:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.031 05:12:20 -- common/autotest_common.sh@874 -- # size=4096 00:15:02.031 05:12:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.031 05:12:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:02.031 05:12:20 -- common/autotest_common.sh@877 -- # return 0 00:15:02.031 05:12:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.031 05:12:20 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:02.031 05:12:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:02.290 /dev/nbd1 00:15:02.290 05:12:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:02.290 05:12:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:02.290 05:12:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:15:02.290 05:12:21 -- common/autotest_common.sh@857 -- # local i 00:15:02.290 05:12:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:02.290 05:12:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:02.290 05:12:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:15:02.290 05:12:21 -- common/autotest_common.sh@861 -- # break 00:15:02.290 05:12:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:02.290 05:12:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:02.290 05:12:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.290 1+0 records in 00:15:02.290 1+0 records out 00:15:02.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561752 s, 7.3 MB/s 00:15:02.290 05:12:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.290 05:12:21 -- common/autotest_common.sh@874 -- # size=4096 00:15:02.290 05:12:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.290 05:12:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:02.290 05:12:21 -- common/autotest_common.sh@877 -- # return 0 00:15:02.290 05:12:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.290 05:12:21 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:02.290 05:12:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:15:02.290 /dev/nbd10 00:15:02.290 05:12:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:02.290 05:12:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:02.290 05:12:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:15:02.290 05:12:21 -- common/autotest_common.sh@857 -- # local i 00:15:02.290 05:12:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:02.290 05:12:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:02.290 05:12:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:15:02.290 05:12:21 -- common/autotest_common.sh@861 -- # break 00:15:02.549 05:12:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:02.549 05:12:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:02.549 05:12:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.549 1+0 records in 00:15:02.549 1+0 records out 00:15:02.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000752693 s, 5.4 MB/s 00:15:02.549 05:12:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.549 05:12:21 -- common/autotest_common.sh@874 -- # size=4096 00:15:02.549 05:12:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.549 05:12:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:02.549 05:12:21 -- common/autotest_common.sh@877 -- # return 0 00:15:02.549 05:12:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.549 05:12:21 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:02.549 05:12:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:15:02.810 /dev/nbd11 00:15:02.810 05:12:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:02.810 05:12:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:02.810 05:12:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:15:02.810 05:12:21 -- common/autotest_common.sh@857 -- # local i 00:15:02.810 05:12:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:02.810 05:12:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:02.810 05:12:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:15:02.810 05:12:21 -- common/autotest_common.sh@861 -- # break 00:15:02.810 05:12:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:02.810 05:12:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:02.810 05:12:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.810 1+0 records in 00:15:02.810 1+0 records out 00:15:02.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509425 s, 8.0 MB/s 00:15:02.810 05:12:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.810 05:12:21 -- common/autotest_common.sh@874 -- # size=4096 00:15:02.810 05:12:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.810 05:12:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:02.811 05:12:21 -- common/autotest_common.sh@877 -- # return 0 00:15:02.811 05:12:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.811 05:12:21 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:02.811 05:12:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:02.811 /dev/nbd12 00:15:02.811 05:12:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:02.811 05:12:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:02.811 05:12:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:15:02.811 05:12:21 -- common/autotest_common.sh@857 -- # local i 00:15:02.811 05:12:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:02.811 05:12:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:02.811 05:12:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:15:02.811 05:12:21 -- common/autotest_common.sh@861 -- # break 00:15:02.811 05:12:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:02.811 05:12:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:02.811 05:12:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.811 1+0 records in 00:15:02.811 1+0 records out 00:15:02.811 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000672055 s, 6.1 MB/s 00:15:02.811 05:12:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.811 05:12:21 -- common/autotest_common.sh@874 -- # size=4096 00:15:02.811 05:12:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.811 05:12:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:02.811 05:12:21 -- common/autotest_common.sh@877 -- # return 0 00:15:02.811 05:12:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.811 05:12:21 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:02.811 05:12:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:03.097 /dev/nbd13 00:15:03.097 05:12:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:03.097 05:12:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:03.097 05:12:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:15:03.097 05:12:22 -- common/autotest_common.sh@857 -- # local i 00:15:03.097 05:12:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:03.097 05:12:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:03.097 05:12:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:15:03.097 05:12:22 -- common/autotest_common.sh@861 -- # break 00:15:03.097 05:12:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:03.097 05:12:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:03.097 05:12:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.097 1+0 records in 00:15:03.097 1+0 records out 00:15:03.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793236 s, 5.2 MB/s 00:15:03.097 05:12:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.097 05:12:22 -- common/autotest_common.sh@874 -- # size=4096 00:15:03.097 05:12:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.097 05:12:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:03.097 05:12:22 -- common/autotest_common.sh@877 -- # return 0 00:15:03.097 05:12:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:03.097 05:12:22 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:03.097 05:12:22 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:03.097 05:12:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:03.097 05:12:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:03.374 05:12:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:03.374 { 00:15:03.374 "nbd_device": "/dev/nbd0", 00:15:03.374 "bdev_name": "nvme0n1" 00:15:03.374 }, 00:15:03.374 { 00:15:03.374 "nbd_device": "/dev/nbd1", 00:15:03.374 "bdev_name": "nvme1n1" 00:15:03.374 }, 00:15:03.374 { 00:15:03.374 "nbd_device": "/dev/nbd10", 00:15:03.374 "bdev_name": "nvme1n2" 00:15:03.374 }, 00:15:03.374 { 00:15:03.374 "nbd_device": "/dev/nbd11", 00:15:03.374 "bdev_name": "nvme1n3" 00:15:03.374 }, 00:15:03.374 { 00:15:03.374 "nbd_device": "/dev/nbd12", 00:15:03.374 "bdev_name": "nvme2n1" 00:15:03.374 }, 00:15:03.374 { 00:15:03.374 "nbd_device": "/dev/nbd13", 00:15:03.375 "bdev_name": "nvme3n1" 00:15:03.375 } 00:15:03.375 ]' 00:15:03.375 05:12:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:03.375 { 00:15:03.375 "nbd_device": 
"/dev/nbd0", 00:15:03.375 "bdev_name": "nvme0n1" 00:15:03.375 }, 00:15:03.375 { 00:15:03.375 "nbd_device": "/dev/nbd1", 00:15:03.375 "bdev_name": "nvme1n1" 00:15:03.375 }, 00:15:03.375 { 00:15:03.375 "nbd_device": "/dev/nbd10", 00:15:03.375 "bdev_name": "nvme1n2" 00:15:03.375 }, 00:15:03.375 { 00:15:03.375 "nbd_device": "/dev/nbd11", 00:15:03.375 "bdev_name": "nvme1n3" 00:15:03.375 }, 00:15:03.375 { 00:15:03.375 "nbd_device": "/dev/nbd12", 00:15:03.375 "bdev_name": "nvme2n1" 00:15:03.375 }, 00:15:03.375 { 00:15:03.375 "nbd_device": "/dev/nbd13", 00:15:03.375 "bdev_name": "nvme3n1" 00:15:03.375 } 00:15:03.375 ]' 00:15:03.375 05:12:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:03.375 05:12:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:03.375 /dev/nbd1 00:15:03.375 /dev/nbd10 00:15:03.375 /dev/nbd11 00:15:03.375 /dev/nbd12 00:15:03.375 /dev/nbd13' 00:15:03.375 05:12:22 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:03.375 /dev/nbd1 00:15:03.375 /dev/nbd10 00:15:03.375 /dev/nbd11 00:15:03.375 /dev/nbd12 00:15:03.375 /dev/nbd13' 00:15:03.375 05:12:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@65 -- # count=6 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@66 -- # echo 6 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@95 -- # count=6 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:03.633 256+0 records in 00:15:03.633 256+0 records out 00:15:03.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113867 s, 92.1 MB/s 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:03.633 256+0 records in 00:15:03.633 256+0 records out 00:15:03.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120841 s, 8.7 MB/s 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:03.633 05:12:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:03.892 256+0 records in 00:15:03.892 256+0 records out 00:15:03.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124287 s, 8.4 MB/s 00:15:03.892 05:12:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:03.892 05:12:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:03.892 256+0 records in 00:15:03.892 256+0 records out 00:15:03.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122963 s, 8.5 MB/s 00:15:03.892 05:12:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:03.892 05:12:22 -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:04.151 256+0 records in 00:15:04.151 256+0 records out 00:15:04.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122558 s, 8.6 MB/s 00:15:04.151 05:12:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:04.151 05:12:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:04.151 256+0 records in 00:15:04.151 256+0 records out 00:15:04.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143555 s, 7.3 MB/s 00:15:04.151 05:12:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:04.151 05:12:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:04.410 256+0 records in 00:15:04.410 256+0 records out 00:15:04.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120656 s, 8.7 MB/s 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@51 -- # local i 00:15:04.410 05:12:23 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.410 05:12:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:04.669 05:12:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:04.669 05:12:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:04.669 05:12:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:04.669 05:12:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.669 05:12:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.669 05:12:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:04.669 05:12:23 -- bdev/nbd_common.sh@41 -- # break 00:15:04.669 05:12:23 -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.669 05:12:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.669 05:12:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:04.928 05:12:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:04.928 05:12:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:04.928 05:12:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:04.928 05:12:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.928 05:12:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.928 05:12:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:04.928 05:12:23 -- bdev/nbd_common.sh@41 -- # break 00:15:04.928 05:12:23 -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.928 05:12:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.928 05:12:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:05.188 05:12:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:05.188 05:12:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:05.188 05:12:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:05.188 05:12:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.188 05:12:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.188 05:12:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:05.188 05:12:24 -- bdev/nbd_common.sh@41 -- # break 00:15:05.188 05:12:24 -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.188 05:12:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.188 05:12:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:05.447 05:12:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:05.447 05:12:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:05.447 05:12:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:05.447 05:12:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.447 05:12:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.447 05:12:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:05.447 05:12:24 -- bdev/nbd_common.sh@41 -- # break 00:15:05.447 05:12:24 -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.447 05:12:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.447 05:12:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:05.706 05:12:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:05.706 05:12:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:05.706 05:12:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:05.706 05:12:24 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.706 05:12:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.706 05:12:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:05.706 05:12:24 -- bdev/nbd_common.sh@41 -- # break 00:15:05.706 05:12:24 -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.706 05:12:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.706 05:12:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:05.965 05:12:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:05.965 05:12:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:05.965 05:12:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:05.965 05:12:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.965 05:12:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.965 05:12:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:05.965 05:12:24 -- bdev/nbd_common.sh@41 -- # break 00:15:05.965 05:12:24 -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.965 05:12:24 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:05.965 05:12:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:05.965 05:12:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@65 -- # true 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@65 -- # count=0 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@104 -- # count=0 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@109 -- # return 0 00:15:06.223 05:12:25 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:15:06.223 05:12:25 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:06.482 malloc_lvol_verify 00:15:06.482 05:12:25 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:06.739 c09fc1bf-d5cd-4c21-83ff-546f7effb39a 00:15:06.739 05:12:25 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:06.998 a079bf0c-8ef9-4f45-899d-84dbf63d1328 00:15:06.998 05:12:25 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:06.998 /dev/nbd0 00:15:06.998 05:12:26 -- 
bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:15:06.998 mke2fs 1.46.5 (30-Dec-2021) 00:15:06.998 Discarding device blocks: 0/4096 done 00:15:06.998 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:06.998 00:15:06.999 Allocating group tables: 0/1 done 00:15:06.999 Writing inode tables: 0/1 done 00:15:06.999 Creating journal (1024 blocks): done 00:15:06.999 Writing superblocks and filesystem accounting information: 0/1 done 00:15:06.999 00:15:06.999 05:12:26 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:15:06.999 05:12:26 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:06.999 05:12:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:06.999 05:12:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:06.999 05:12:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:06.999 05:12:26 -- bdev/nbd_common.sh@51 -- # local i 00:15:06.999 05:12:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.999 05:12:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:07.258 05:12:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:07.258 05:12:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:07.258 05:12:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:07.258 05:12:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.258 05:12:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.258 05:12:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:07.258 05:12:26 -- bdev/nbd_common.sh@41 -- # break 00:15:07.258 05:12:26 -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.258 05:12:26 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:15:07.258 05:12:26 -- bdev/nbd_common.sh@147 -- # return 0 00:15:07.258 05:12:26 -- bdev/blockdev.sh@324 -- # killprocess 69113 00:15:07.258 05:12:26 -- common/autotest_common.sh@926 -- # '[' -z 69113 ']' 00:15:07.258 05:12:26 -- common/autotest_common.sh@930 -- # kill -0 69113 00:15:07.258 05:12:26 -- common/autotest_common.sh@931 -- # uname 00:15:07.258 05:12:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:07.258 05:12:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69113 00:15:07.517 05:12:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:07.517 05:12:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:07.517 killing process with pid 69113 00:15:07.517 05:12:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69113' 00:15:07.517 05:12:26 -- common/autotest_common.sh@945 -- # kill 69113 00:15:07.517 05:12:26 -- common/autotest_common.sh@950 -- # wait 69113 00:15:08.894 05:12:27 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:15:08.894 00:15:08.894 real 0m11.192s 00:15:08.894 user 0m14.613s 00:15:08.894 sys 0m4.547s 00:15:08.894 05:12:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.894 05:12:27 -- common/autotest_common.sh@10 -- # set +x 00:15:08.894 ************************************ 00:15:08.894 END TEST bdev_nbd 00:15:08.894 ************************************ 00:15:08.894 05:12:27 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:15:08.894 05:12:27 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:15:08.894 05:12:27 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:15:08.894 05:12:27 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:15:08.894 05:12:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 
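At this point bdev_nbd has passed and run_test launches the bdev_fio suite. The suite builds its fio job file on the fly: fio_config_gen seeds /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio with the verify workload defaults (including the serialize_overlap=1 line echoed for fio 3.x), and the loop traced just below appends one [job_<bdev>] section with a filename= line for each of the six bdevs before fio is invoked with the spdk_bdev ioengine. A hedged sketch of that generation step; the [global] keys shown are illustrative assumptions, since the real defaults come from fio_config_gen and from the fio_params string visible further down (--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10):

# illustrative sketch only: the [global] contents are assumed, not taken from fio_config_gen
fio_cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
cat > "$fio_cfg" <<'EOF'
[global]
verify=crc32c
serialize_overlap=1
EOF
for b in nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1; do
    {
        echo "[job_$b]"
        echo "filename=$b"
    } >> "$fio_cfg"
done

Note that the filenames are bdev names rather than kernel device paths: the spdk_bdev ioengine resolves them against the JSON configuration passed via --spdk_json_conf, so no kernel block devices are involved in this phase.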
00:15:08.894 05:12:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:08.894 05:12:27 -- common/autotest_common.sh@10 -- # set +x 00:15:08.894 ************************************ 00:15:08.894 START TEST bdev_fio 00:15:08.894 ************************************ 00:15:08.894 05:12:27 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:15:08.894 05:12:27 -- bdev/blockdev.sh@329 -- # local env_context 00:15:08.894 05:12:27 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:08.894 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:08.894 05:12:27 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:08.894 05:12:27 -- bdev/blockdev.sh@337 -- # echo '' 00:15:08.894 05:12:27 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:15:08.894 05:12:27 -- bdev/blockdev.sh@337 -- # env_context= 00:15:08.894 05:12:27 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:08.894 05:12:27 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:08.894 05:12:27 -- common/autotest_common.sh@1260 -- # local workload=verify 00:15:08.894 05:12:27 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:15:08.894 05:12:27 -- common/autotest_common.sh@1262 -- # local env_context= 00:15:08.894 05:12:27 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:15:08.894 05:12:27 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:08.894 05:12:27 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:15:08.894 05:12:27 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:15:08.894 05:12:27 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:08.894 05:12:27 -- common/autotest_common.sh@1280 -- # cat 00:15:08.894 05:12:27 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:15:08.894 05:12:27 -- common/autotest_common.sh@1293 -- # cat 00:15:08.894 05:12:27 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:15:08.894 05:12:27 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:15:08.894 05:12:27 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:08.894 05:12:27 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:15:08.894 05:12:27 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:08.894 05:12:27 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:15:08.894 05:12:27 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:15:08.894 05:12:27 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:08.894 05:12:27 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:15:08.894 05:12:27 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:15:08.894 05:12:27 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:08.894 05:12:27 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:15:08.894 05:12:27 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:15:08.894 05:12:27 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:08.894 05:12:27 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:15:08.894 05:12:27 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:15:08.894 05:12:27 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:08.894 05:12:27 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:15:08.894 05:12:27 -- bdev/blockdev.sh@341 -- # echo 
filename=nvme2n1 00:15:08.894 05:12:27 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:15:08.894 05:12:27 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:15:08.894 05:12:27 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:15:08.894 05:12:27 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:08.895 05:12:27 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:08.895 05:12:27 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:15:08.895 05:12:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:08.895 05:12:27 -- common/autotest_common.sh@10 -- # set +x 00:15:08.895 ************************************ 00:15:08.895 START TEST bdev_fio_rw_verify 00:15:08.895 ************************************ 00:15:08.895 05:12:27 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:08.895 05:12:27 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:08.895 05:12:27 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:08.895 05:12:27 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:08.895 05:12:27 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:08.895 05:12:27 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.895 05:12:27 -- common/autotest_common.sh@1320 -- # shift 00:15:08.895 05:12:27 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:08.895 05:12:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:08.895 05:12:27 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.895 05:12:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:08.895 05:12:27 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:08.895 05:12:27 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:08.895 05:12:27 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:08.895 05:12:27 -- common/autotest_common.sh@1326 -- # break 00:15:08.895 05:12:27 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:08.895 05:12:27 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:09.154 
job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:09.154 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:09.154 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:09.154 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:09.154 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:09.154 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:09.154 fio-3.35 00:15:09.154 Starting 6 threads 00:15:21.355 00:15:21.355 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69523: Fri Jul 26 05:12:39 2024 00:15:21.355 read: IOPS=31.6k, BW=123MiB/s (129MB/s)(1235MiB/10001msec) 00:15:21.355 slat (usec): min=2, max=1832, avg= 6.07, stdev= 5.40 00:15:21.355 clat (usec): min=88, max=4456, avg=597.34, stdev=207.44 00:15:21.355 lat (usec): min=94, max=4460, avg=603.41, stdev=208.13 00:15:21.355 clat percentiles (usec): 00:15:21.355 | 50.000th=[ 627], 99.000th=[ 1090], 99.900th=[ 1565], 99.990th=[ 3720], 00:15:21.355 | 99.999th=[ 3949] 00:15:21.355 write: IOPS=31.9k, BW=125MiB/s (131MB/s)(1247MiB/10001msec); 0 zone resets 00:15:21.355 slat (usec): min=12, max=3516, avg=23.16, stdev=27.07 00:15:21.355 clat (usec): min=80, max=5688, avg=674.26, stdev=225.95 00:15:21.355 lat (usec): min=99, max=5717, avg=697.42, stdev=228.43 00:15:21.355 clat percentiles (usec): 00:15:21.355 | 50.000th=[ 685], 99.000th=[ 1287], 99.900th=[ 1942], 99.990th=[ 4178], 00:15:21.355 | 99.999th=[ 5604] 00:15:21.355 bw ( KiB/s): min=100304, max=158826, per=100.00%, avg=128551.00, stdev=2805.01, samples=114 00:15:21.355 iops : min=25076, max=39706, avg=32137.63, stdev=701.24, samples=114 00:15:21.355 lat (usec) : 100=0.01%, 250=3.51%, 500=22.30%, 750=46.74%, 1000=23.77% 00:15:21.355 lat (msec) : 2=3.60%, 4=0.07%, 10=0.01% 00:15:21.355 cpu : usr=58.52%, sys=27.99%, ctx=8332, majf=0, minf=28240 00:15:21.355 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:21.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.355 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:21.355 issued rwts: total=316159,319340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:21.355 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:21.355 00:15:21.355 Run status group 0 (all jobs): 00:15:21.355 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=1235MiB (1295MB), run=10001-10001msec 00:15:21.355 WRITE: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=1247MiB (1308MB), run=10001-10001msec 00:15:21.355 ----------------------------------------------------- 00:15:21.355 Suppressions used: 00:15:21.355 count bytes template 00:15:21.355 6 48 /usr/src/fio/parse.c 00:15:21.355 2966 284736 /usr/src/fio/iolog.c 00:15:21.355 1 8 libtcmalloc_minimal.so 00:15:21.355 1 904 libcrypto.so 00:15:21.355 ----------------------------------------------------- 00:15:21.355 00:15:21.614 00:15:21.615 real 0m12.652s 00:15:21.615 user 0m37.333s 00:15:21.615 sys 0m17.258s 00:15:21.615 05:12:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.615 05:12:40 -- common/autotest_common.sh@10 -- # set +x 00:15:21.615 
************************************ 00:15:21.615 END TEST bdev_fio_rw_verify 00:15:21.615 ************************************ 00:15:21.615 05:12:40 -- bdev/blockdev.sh@348 -- # rm -f 00:15:21.615 05:12:40 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:21.615 05:12:40 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:21.615 05:12:40 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:21.615 05:12:40 -- common/autotest_common.sh@1260 -- # local workload=trim 00:15:21.615 05:12:40 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:15:21.615 05:12:40 -- common/autotest_common.sh@1262 -- # local env_context= 00:15:21.615 05:12:40 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:15:21.615 05:12:40 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:21.615 05:12:40 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:15:21.615 05:12:40 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:15:21.615 05:12:40 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:21.615 05:12:40 -- common/autotest_common.sh@1280 -- # cat 00:15:21.615 05:12:40 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:15:21.615 05:12:40 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:15:21.615 05:12:40 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:15:21.615 05:12:40 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "42c3ede9-1061-4f8a-bd3f-3dfd00de9333"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "42c3ede9-1061-4f8a-bd3f-3dfd00de9333",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c9a79b89-0342-4354-afff-11aa8f0e3e99"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c9a79b89-0342-4354-afff-11aa8f0e3e99",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "47e6fe01-b4ab-47a1-95ea-f97dd88da531"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "47e6fe01-b4ab-47a1-95ea-f97dd88da531",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": 
false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "d9027314-326b-4ed0-bbd1-ef7d156db897"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d9027314-326b-4ed0-bbd1-ef7d156db897",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f679f66e-21b8-402f-963e-179968803b35"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f679f66e-21b8-402f-963e-179968803b35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0dea93fe-1e75-41f2-ae8d-67e00aa84fc7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0dea93fe-1e75-41f2-ae8d-67e00aa84fc7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:15:21.615 05:12:40 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:21.615 05:12:40 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:15:21.615 05:12:40 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:21.615 05:12:40 -- bdev/blockdev.sh@360 -- # popd 00:15:21.615 /home/vagrant/spdk_repo/spdk 00:15:21.615 05:12:40 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:15:21.615 05:12:40 -- bdev/blockdev.sh@362 -- # return 0 00:15:21.615 00:15:21.615 real 0m12.857s 00:15:21.615 user 0m37.435s 00:15:21.615 sys 0m17.351s 00:15:21.615 05:12:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.615 ************************************ 00:15:21.615 END TEST bdev_fio 00:15:21.615 ************************************ 00:15:21.615 05:12:40 -- common/autotest_common.sh@10 -- # set +x 00:15:21.615 05:12:40 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:21.615 05:12:40 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:21.615 05:12:40 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:15:21.615 05:12:40 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:15:21.615 05:12:40 -- common/autotest_common.sh@10 -- # set +x 00:15:21.615 ************************************ 00:15:21.615 START TEST bdev_verify 00:15:21.615 ************************************ 00:15:21.615 05:12:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:21.874 [2024-07-26 05:12:40.762011] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:21.874 [2024-07-26 05:12:40.762179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69700 ] 00:15:21.874 [2024-07-26 05:12:40.948709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:22.133 [2024-07-26 05:12:41.229692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.133 [2024-07-26 05:12:41.229714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.699 Running I/O for 5 seconds... 00:15:27.990 00:15:27.990 Latency(us) 00:15:27.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.990 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.990 Verification LBA range: start 0x0 length 0x20000 00:15:27.990 nvme0n1 : 5.07 2724.67 10.64 0.00 0.00 46759.46 11297.16 72401.68 00:15:27.990 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.990 Verification LBA range: start 0x20000 length 0x20000 00:15:27.990 nvme0n1 : 5.06 2633.76 10.29 0.00 0.00 48459.69 14605.17 62914.56 00:15:27.990 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.990 Verification LBA range: start 0x0 length 0x80000 00:15:27.990 nvme1n1 : 5.07 2579.05 10.07 0.00 0.00 49418.29 16103.13 80890.15 00:15:27.990 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.990 Verification LBA range: start 0x80000 length 0x80000 00:15:27.991 nvme1n1 : 5.06 2723.95 10.64 0.00 0.00 46794.68 5398.92 68906.42 00:15:27.991 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.991 Verification LBA range: start 0x0 length 0x80000 00:15:27.991 nvme1n2 : 5.06 2622.05 10.24 0.00 0.00 48623.77 10735.42 67907.78 00:15:27.991 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.991 Verification LBA range: start 0x80000 length 0x80000 00:15:27.991 nvme1n2 : 5.06 2679.37 10.47 0.00 0.00 47538.73 4088.20 68407.10 00:15:27.991 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.991 Verification LBA range: start 0x0 length 0x80000 00:15:27.991 nvme1n3 : 5.07 2732.48 10.67 0.00 0.00 46587.36 14293.09 70404.39 00:15:27.991 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.991 Verification LBA range: start 0x80000 length 0x80000 00:15:27.991 nvme1n3 : 5.06 2678.90 10.46 0.00 0.00 47444.74 12607.88 66409.81 00:15:27.991 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.991 Verification LBA range: start 0x0 length 0xbd0bd 00:15:27.991 nvme2n1 : 5.07 2999.41 11.72 0.00 0.00 42454.80 4088.20 69405.74 00:15:27.991 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.991 Verification LBA range: start 
0xbd0bd length 0xbd0bd 00:15:27.991 nvme2n1 : 5.06 3114.94 12.17 0.00 0.00 40779.18 4337.86 58919.98 00:15:27.991 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.991 Verification LBA range: start 0x0 length 0xa0000 00:15:27.991 nvme3n1 : 5.08 2657.27 10.38 0.00 0.00 47867.15 6116.69 68407.10 00:15:27.991 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.991 Verification LBA range: start 0xa0000 length 0xa0000 00:15:27.991 nvme3n1 : 5.06 2570.37 10.04 0.00 0.00 49367.72 9299.87 68906.42 00:15:27.991 =================================================================================================================== 00:15:27.991 Total : 32716.22 127.80 0.00 0.00 46694.52 4088.20 80890.15 00:15:29.365 00:15:29.365 real 0m7.560s 00:15:29.365 user 0m9.691s 00:15:29.365 sys 0m3.542s 00:15:29.365 05:12:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:29.365 05:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:29.365 ************************************ 00:15:29.365 END TEST bdev_verify 00:15:29.365 ************************************ 00:15:29.365 05:12:48 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:29.365 05:12:48 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:15:29.365 05:12:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:29.365 05:12:48 -- common/autotest_common.sh@10 -- # set +x 00:15:29.365 ************************************ 00:15:29.365 START TEST bdev_verify_big_io 00:15:29.365 ************************************ 00:15:29.365 05:12:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:29.365 [2024-07-26 05:12:48.381621] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:29.365 [2024-07-26 05:12:48.381788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69804 ] 00:15:29.624 [2024-07-26 05:12:48.562364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:29.882 [2024-07-26 05:12:48.786852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.882 [2024-07-26 05:12:48.786884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.449 Running I/O for 5 seconds... 
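Both verify passes in this stage share a single bdevperf invocation shape and differ only in IO size (4096 bytes for bdev_verify above, 65536 bytes for the big-IO pass now starting). A minimal sketch of that pattern follows; the wrapper name and variable layout are illustrative, not lifted from blockdev.sh:

# Sketch only: condenses the bdevperf verify invocations traced above.
# SPDK_DIR and run_bdevperf_verify are assumptions for illustration.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
run_bdevperf_verify() {
  local io_size=$1   # 4096 for bdev_verify, 65536 for bdev_verify_big_io
  "$SPDK_DIR"/build/examples/bdevperf \
    --json "$SPDK_DIR"/test/bdev/bdev.json \
    -q 128 -o "$io_size" -w verify -t 5 -C -m 0x3
}
run_bdevperf_verify 65536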
00:15:37.011 00:15:37.011 Latency(us) 00:15:37.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.011 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:37.011 Verification LBA range: start 0x0 length 0x2000 00:15:37.011 nvme0n1 : 5.42 322.81 20.18 0.00 0.00 392868.32 22219.82 519294.78 00:15:37.011 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:37.011 Verification LBA range: start 0x2000 length 0x2000 00:15:37.011 nvme0n1 : 5.44 351.11 21.94 0.00 0.00 355984.90 25340.59 479349.03 00:15:37.011 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:37.011 Verification LBA range: start 0x0 length 0x8000 00:15:37.011 nvme1n1 : 5.42 289.41 18.09 0.00 0.00 433755.85 21221.18 489335.47 00:15:37.011 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:37.011 Verification LBA range: start 0x8000 length 0x8000 00:15:37.011 nvme1n1 : 5.44 305.06 19.07 0.00 0.00 402099.73 24841.26 443397.85 00:15:37.011 Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:37.011 Verification LBA range: start 0x0 length 0x8000 00:15:37.011 nvme1n2 : 5.43 321.94 20.12 0.00 0.00 385381.00 19473.55 485340.89 00:15:37.011 Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:37.011 Verification LBA range: start 0x8000 length 0x8000 00:15:37.011 nvme1n2 : 5.42 306.32 19.14 0.00 0.00 398293.34 102360.99 497324.62 00:15:37.011 Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:37.011 Verification LBA range: start 0x0 length 0x8000 00:15:37.011 nvme1n3 : 5.42 322.29 20.14 0.00 0.00 381640.65 16602.45 475354.45 00:15:37.011 Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:37.011 Verification LBA range: start 0x8000 length 0x8000 00:15:37.011 nvme1n3 : 5.43 272.17 17.01 0.00 0.00 443176.74 127327.09 603180.86 00:15:37.011 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:37.011 Verification LBA range: start 0x0 length 0xbd0b 00:15:37.011 nvme2n1 : 5.43 305.40 19.09 0.00 0.00 397733.56 23218.47 439403.28 00:15:37.011 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:37.011 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:37.011 nvme2n1 : 5.44 321.25 20.08 0.00 0.00 377489.47 16976.94 461373.44 00:15:37.011 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:37.011 Verification LBA range: start 0x0 length 0xa000 00:15:37.011 nvme3n1 : 5.43 322.02 20.13 0.00 0.00 372843.99 15166.90 495327.33 00:15:37.011 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:37.011 Verification LBA range: start 0xa000 length 0xa000 00:15:37.011 nvme3n1 : 5.44 351.20 21.95 0.00 0.00 339668.86 6303.94 559240.53 00:15:37.011 =================================================================================================================== 00:15:37.011 Total : 3790.97 236.94 0.00 0.00 388217.78 6303.94 603180.86 00:15:37.579 00:15:37.579 real 0m8.166s 00:15:37.579 user 0m14.251s 00:15:37.579 sys 0m0.818s 00:15:37.579 05:12:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:37.579 05:12:56 -- common/autotest_common.sh@10 -- # set +x 00:15:37.579 ************************************ 00:15:37.579 END TEST bdev_verify_big_io 00:15:37.579 ************************************ 00:15:37.579 05:12:56 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:37.579 05:12:56 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:15:37.579 05:12:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:37.579 05:12:56 -- common/autotest_common.sh@10 -- # set +x 00:15:37.579 ************************************ 00:15:37.579 START TEST bdev_write_zeroes 00:15:37.579 ************************************ 00:15:37.579 05:12:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:37.579 [2024-07-26 05:12:56.605350] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:37.579 [2024-07-26 05:12:56.605513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69918 ] 00:15:37.837 [2024-07-26 05:12:56.788339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.096 [2024-07-26 05:12:57.016597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.665 Running I/O for 1 seconds... 00:15:39.599 00:15:39.599 Latency(us) 00:15:39.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.599 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:39.599 nvme0n1 : 1.01 14779.70 57.73 0.00 0.00 8653.39 5492.54 13356.86 00:15:39.599 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:39.599 nvme1n1 : 1.01 14843.79 57.98 0.00 0.00 8610.73 6428.77 12607.88 00:15:39.599 Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:39.599 nvme1n2 : 1.01 14825.95 57.91 0.00 0.00 8615.48 6366.35 12795.12 00:15:39.599 Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:39.599 nvme1n3 : 1.01 14809.06 57.85 0.00 0.00 8620.43 6366.35 13044.78 00:15:39.599 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:39.599 nvme2n1 : 1.01 17788.35 69.49 0.00 0.00 7171.41 3432.84 10236.10 00:15:39.599 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:39.599 nvme3n1 : 1.01 14792.44 57.78 0.00 0.00 8586.54 5211.67 12795.12 00:15:39.599 =================================================================================================================== 00:15:39.599 Total : 91839.28 358.75 0.00 0.00 8338.10 3432.84 13356.86 00:15:40.976 00:15:40.976 real 0m3.280s 00:15:40.976 user 0m2.469s 00:15:40.976 sys 0m0.635s 00:15:40.976 05:12:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.976 05:12:59 -- common/autotest_common.sh@10 -- # set +x 00:15:40.976 ************************************ 00:15:40.976 END TEST bdev_write_zeroes 00:15:40.976 ************************************ 00:15:40.976 05:12:59 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:40.976 05:12:59 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:15:40.976 05:12:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:40.976 05:12:59 -- common/autotest_common.sh@10 -- # 
set +x 00:15:40.976 ************************************ 00:15:40.976 START TEST bdev_json_nonenclosed 00:15:40.976 ************************************ 00:15:40.976 05:12:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:40.976 [2024-07-26 05:12:59.947524] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:40.976 [2024-07-26 05:12:59.947677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69977 ] 00:15:41.234 [2024-07-26 05:13:00.128997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.493 [2024-07-26 05:13:00.351507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.493 [2024-07-26 05:13:00.351683] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:41.493 [2024-07-26 05:13:00.351706] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:41.751 00:15:41.751 real 0m0.936s 00:15:41.751 user 0m0.678s 00:15:41.751 sys 0m0.152s 00:15:41.751 05:13:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:41.751 05:13:00 -- common/autotest_common.sh@10 -- # set +x 00:15:41.751 ************************************ 00:15:41.751 END TEST bdev_json_nonenclosed 00:15:41.751 ************************************ 00:15:41.751 05:13:00 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:41.751 05:13:00 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:15:41.751 05:13:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:41.751 05:13:00 -- common/autotest_common.sh@10 -- # set +x 00:15:41.751 ************************************ 00:15:41.751 START TEST bdev_json_nonarray 00:15:41.751 ************************************ 00:15:41.751 05:13:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:42.010 [2024-07-26 05:13:00.916426] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:42.010 [2024-07-26 05:13:00.916551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70008 ] 00:15:42.010 [2024-07-26 05:13:01.074444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.269 [2024-07-26 05:13:01.296391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.269 [2024-07-26 05:13:01.296590] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
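The two JSON negative tests here feed deliberately malformed configs to bdevperf and expect spdk_subsystem_init_from_json_config to reject them. The fixture contents are not shown in the trace; a plausible reconstruction of the two inputs, matching the errors logged (the file bodies are assumptions — only the error strings come from the log):

# Hypothetical fixture shapes; only the resulting error messages are confirmed above.
# nonenclosed.json: top level is not an object, so init fails with
#   "Invalid JSON configuration: not enclosed in {}."
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
# nonarray.json: "subsystems" is an object, not an array, so init fails with
#   "Invalid JSON configuration: 'subsystems' should be an array."
cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF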
00:15:42.269 [2024-07-26 05:13:01.296613] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:42.844 00:15:42.844 real 0m0.876s 00:15:42.844 user 0m0.628s 00:15:42.844 sys 0m0.144s 00:15:42.844 05:13:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.844 ************************************ 00:15:42.844 END TEST bdev_json_nonarray 00:15:42.844 05:13:01 -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 ************************************ 00:15:42.844 05:13:01 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:15:42.844 05:13:01 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:15:42.844 05:13:01 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:15:42.844 05:13:01 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:15:42.844 05:13:01 -- bdev/blockdev.sh@809 -- # cleanup 00:15:42.844 05:13:01 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:42.844 05:13:01 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:42.844 05:13:01 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:15:42.845 05:13:01 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:15:42.845 05:13:01 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:15:42.845 05:13:01 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:15:42.845 05:13:01 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:43.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:46.312 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:15:46.312 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:15:46.312 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:15:46.312 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:15:46.570 00:15:46.570 real 1m5.391s 00:15:46.570 user 1m42.154s 00:15:46.570 sys 0m36.022s 00:15:46.570 05:13:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.570 05:13:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.570 ************************************ 00:15:46.570 END TEST blockdev_xnvme 00:15:46.570 ************************************ 00:15:46.570 05:13:05 -- spdk/autotest.sh@259 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:46.570 05:13:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:46.570 05:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:46.570 05:13:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.570 ************************************ 00:15:46.570 START TEST ublk 00:15:46.570 ************************************ 00:15:46.570 05:13:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:46.570 * Looking for test storage... 
00:15:46.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:46.570 05:13:05 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:46.570 05:13:05 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:46.570 05:13:05 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:46.570 05:13:05 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:46.570 05:13:05 -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:46.570 05:13:05 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:46.570 05:13:05 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:46.570 05:13:05 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:46.570 05:13:05 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:46.570 05:13:05 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:46.570 05:13:05 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:46.570 05:13:05 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:46.570 05:13:05 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:46.571 05:13:05 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:46.571 05:13:05 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:46.571 05:13:05 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:46.571 05:13:05 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:46.571 05:13:05 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:46.571 05:13:05 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:46.571 05:13:05 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:46.571 05:13:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:46.571 05:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:46.571 05:13:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.571 ************************************ 00:15:46.571 START TEST test_save_ublk_config 00:15:46.571 ************************************ 00:15:46.571 05:13:05 -- common/autotest_common.sh@1104 -- # test_save_config 00:15:46.571 05:13:05 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:46.571 05:13:05 -- ublk/ublk.sh@103 -- # tgtpid=70318 00:15:46.571 05:13:05 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:46.571 05:13:05 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:46.571 05:13:05 -- ublk/ublk.sh@106 -- # waitforlisten 70318 00:15:46.571 05:13:05 -- common/autotest_common.sh@819 -- # '[' -z 70318 ']' 00:15:46.571 05:13:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.571 05:13:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:46.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.571 05:13:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.571 05:13:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:46.571 05:13:05 -- common/autotest_common.sh@10 -- # set +x 00:15:46.828 [2024-07-26 05:13:05.731604] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
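The knobs set above (NUM_DEVS=4, NUM_QUEUE=4, QUEUE_DEPTH=512, MALLOC_BS=4096, 128 MiB malloc backings) drive every bdev_malloc_create/ublk_start_disk pair issued later in this suite. A sketch of how the multi-device stages consume them; the loop body is a simplification, not the literal ublk.sh code:

# Simplified per-device setup matching the knobs above; rpc_cmd is the
# harness wrapper around scripts/rpc.py.
for i in $(seq 0 $MAX_DEV_ID); do
  rpc_cmd bdev_malloc_create "$MALLOC_SIZE_MB" "$MALLOC_BS" -b "Malloc$i"
  rpc_cmd ublk_start_disk "Malloc$i" "$i" -q "$NUM_QUEUE" -d "$QUEUE_DEPTH"
done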
00:15:46.828 [2024-07-26 05:13:05.731758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70318 ] 00:15:46.828 [2024-07-26 05:13:05.919735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.453 [2024-07-26 05:13:06.280949] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:47.453 [2024-07-26 05:13:06.281236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.388 05:13:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:48.388 05:13:07 -- common/autotest_common.sh@852 -- # return 0 00:15:48.388 05:13:07 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:48.388 05:13:07 -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:48.388 05:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:48.388 05:13:07 -- common/autotest_common.sh@10 -- # set +x 00:15:48.388 [2024-07-26 05:13:07.322647] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:48.388 malloc0 00:15:48.388 [2024-07-26 05:13:07.417361] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:48.388 [2024-07-26 05:13:07.417482] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:48.388 [2024-07-26 05:13:07.417493] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:48.388 [2024-07-26 05:13:07.417504] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:48.388 [2024-07-26 05:13:07.426313] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:48.388 [2024-07-26 05:13:07.426343] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:48.388 [2024-07-26 05:13:07.432259] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:48.388 [2024-07-26 05:13:07.432392] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:48.388 [2024-07-26 05:13:07.449245] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:48.388 0 00:15:48.388 05:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:48.388 05:13:07 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:48.388 05:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:48.388 05:13:07 -- common/autotest_common.sh@10 -- # set +x 00:15:48.648 05:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:48.648 05:13:07 -- ublk/ublk.sh@115 -- # config='{ 00:15:48.648 "subsystems": [ 00:15:48.648 { 00:15:48.648 "subsystem": "iobuf", 00:15:48.648 "config": [ 00:15:48.648 { 00:15:48.648 "method": "iobuf_set_options", 00:15:48.648 "params": { 00:15:48.648 "small_pool_count": 8192, 00:15:48.648 "large_pool_count": 1024, 00:15:48.648 "small_bufsize": 8192, 00:15:48.648 "large_bufsize": 135168 00:15:48.648 } 00:15:48.648 } 00:15:48.648 ] 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "subsystem": "sock", 00:15:48.648 "config": [ 00:15:48.648 { 00:15:48.648 "method": "sock_impl_set_options", 00:15:48.648 "params": { 00:15:48.648 "impl_name": "posix", 00:15:48.648 "recv_buf_size": 2097152, 00:15:48.648 "send_buf_size": 2097152, 00:15:48.648 "enable_recv_pipe": true, 00:15:48.648 "enable_quickack": false, 00:15:48.648 "enable_placement_id": 0, 00:15:48.648 
"enable_zerocopy_send_server": true, 00:15:48.648 "enable_zerocopy_send_client": false, 00:15:48.648 "zerocopy_threshold": 0, 00:15:48.648 "tls_version": 0, 00:15:48.648 "enable_ktls": false 00:15:48.648 } 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "method": "sock_impl_set_options", 00:15:48.648 "params": { 00:15:48.648 "impl_name": "ssl", 00:15:48.648 "recv_buf_size": 4096, 00:15:48.648 "send_buf_size": 4096, 00:15:48.648 "enable_recv_pipe": true, 00:15:48.648 "enable_quickack": false, 00:15:48.648 "enable_placement_id": 0, 00:15:48.648 "enable_zerocopy_send_server": true, 00:15:48.648 "enable_zerocopy_send_client": false, 00:15:48.648 "zerocopy_threshold": 0, 00:15:48.648 "tls_version": 0, 00:15:48.648 "enable_ktls": false 00:15:48.648 } 00:15:48.648 } 00:15:48.648 ] 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "subsystem": "vmd", 00:15:48.648 "config": [] 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "subsystem": "accel", 00:15:48.648 "config": [ 00:15:48.648 { 00:15:48.648 "method": "accel_set_options", 00:15:48.648 "params": { 00:15:48.648 "small_cache_size": 128, 00:15:48.648 "large_cache_size": 16, 00:15:48.648 "task_count": 2048, 00:15:48.648 "sequence_count": 2048, 00:15:48.648 "buf_count": 2048 00:15:48.648 } 00:15:48.648 } 00:15:48.648 ] 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "subsystem": "bdev", 00:15:48.648 "config": [ 00:15:48.648 { 00:15:48.648 "method": "bdev_set_options", 00:15:48.648 "params": { 00:15:48.648 "bdev_io_pool_size": 65535, 00:15:48.648 "bdev_io_cache_size": 256, 00:15:48.648 "bdev_auto_examine": true, 00:15:48.648 "iobuf_small_cache_size": 128, 00:15:48.648 "iobuf_large_cache_size": 16 00:15:48.648 } 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "method": "bdev_raid_set_options", 00:15:48.648 "params": { 00:15:48.648 "process_window_size_kb": 1024 00:15:48.648 } 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "method": "bdev_iscsi_set_options", 00:15:48.648 "params": { 00:15:48.648 "timeout_sec": 30 00:15:48.648 } 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "method": "bdev_nvme_set_options", 00:15:48.648 "params": { 00:15:48.648 "action_on_timeout": "none", 00:15:48.648 "timeout_us": 0, 00:15:48.648 "timeout_admin_us": 0, 00:15:48.648 "keep_alive_timeout_ms": 10000, 00:15:48.648 "transport_retry_count": 4, 00:15:48.648 "arbitration_burst": 0, 00:15:48.648 "low_priority_weight": 0, 00:15:48.648 "medium_priority_weight": 0, 00:15:48.648 "high_priority_weight": 0, 00:15:48.648 "nvme_adminq_poll_period_us": 10000, 00:15:48.648 "nvme_ioq_poll_period_us": 0, 00:15:48.648 "io_queue_requests": 0, 00:15:48.648 "delay_cmd_submit": true, 00:15:48.648 "bdev_retry_count": 3, 00:15:48.648 "transport_ack_timeout": 0, 00:15:48.648 "ctrlr_loss_timeout_sec": 0, 00:15:48.648 "reconnect_delay_sec": 0, 00:15:48.648 "fast_io_fail_timeout_sec": 0, 00:15:48.648 "generate_uuids": false, 00:15:48.648 "transport_tos": 0, 00:15:48.648 "io_path_stat": false, 00:15:48.648 "allow_accel_sequence": false 00:15:48.648 } 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "method": "bdev_nvme_set_hotplug", 00:15:48.648 "params": { 00:15:48.648 "period_us": 100000, 00:15:48.648 "enable": false 00:15:48.648 } 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "method": "bdev_malloc_create", 00:15:48.648 "params": { 00:15:48.648 "name": "malloc0", 00:15:48.648 "num_blocks": 8192, 00:15:48.648 "block_size": 4096, 00:15:48.648 "physical_block_size": 4096, 00:15:48.648 "uuid": "a7d41780-fc60-4798-ada1-2a383db78add", 00:15:48.648 "optimal_io_boundary": 0 00:15:48.648 } 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 
"method": "bdev_wait_for_examine" 00:15:48.648 } 00:15:48.648 ] 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "subsystem": "scsi", 00:15:48.648 "config": null 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "subsystem": "scheduler", 00:15:48.648 "config": [ 00:15:48.648 { 00:15:48.648 "method": "framework_set_scheduler", 00:15:48.648 "params": { 00:15:48.648 "name": "static" 00:15:48.648 } 00:15:48.648 } 00:15:48.648 ] 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "subsystem": "vhost_scsi", 00:15:48.648 "config": [] 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "subsystem": "vhost_blk", 00:15:48.648 "config": [] 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "subsystem": "ublk", 00:15:48.648 "config": [ 00:15:48.648 { 00:15:48.648 "method": "ublk_create_target", 00:15:48.648 "params": { 00:15:48.648 "cpumask": "1" 00:15:48.648 } 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "method": "ublk_start_disk", 00:15:48.648 "params": { 00:15:48.648 "bdev_name": "malloc0", 00:15:48.648 "ublk_id": 0, 00:15:48.648 "num_queues": 1, 00:15:48.648 "queue_depth": 128 00:15:48.648 } 00:15:48.648 } 00:15:48.648 ] 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "subsystem": "nbd", 00:15:48.648 "config": [] 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "subsystem": "nvmf", 00:15:48.648 "config": [ 00:15:48.648 { 00:15:48.648 "method": "nvmf_set_config", 00:15:48.648 "params": { 00:15:48.649 "discovery_filter": "match_any", 00:15:48.649 "admin_cmd_passthru": { 00:15:48.649 "identify_ctrlr": false 00:15:48.649 } 00:15:48.649 } 00:15:48.649 }, 00:15:48.649 { 00:15:48.649 "method": "nvmf_set_max_subsystems", 00:15:48.649 "params": { 00:15:48.649 "max_subsystems": 1024 00:15:48.649 } 00:15:48.649 }, 00:15:48.649 { 00:15:48.649 "method": "nvmf_set_crdt", 00:15:48.649 "params": { 00:15:48.649 "crdt1": 0, 00:15:48.649 "crdt2": 0, 00:15:48.649 "crdt3": 0 00:15:48.649 } 00:15:48.649 } 00:15:48.649 ] 00:15:48.649 }, 00:15:48.649 { 00:15:48.649 "subsystem": "iscsi", 00:15:48.649 "config": [ 00:15:48.649 { 00:15:48.649 "method": "iscsi_set_options", 00:15:48.649 "params": { 00:15:48.649 "node_base": "iqn.2016-06.io.spdk", 00:15:48.649 "max_sessions": 128, 00:15:48.649 "max_connections_per_session": 2, 00:15:48.649 "max_queue_depth": 64, 00:15:48.649 "default_time2wait": 2, 00:15:48.649 "default_time2retain": 20, 00:15:48.649 "first_burst_length": 8192, 00:15:48.649 "immediate_data": true, 00:15:48.649 "allow_duplicated_isid": false, 00:15:48.649 "error_recovery_level": 0, 00:15:48.649 "nop_timeout": 60, 00:15:48.649 "nop_in_interval": 30, 00:15:48.649 "disable_chap": false, 00:15:48.649 "require_chap": false, 00:15:48.649 "mutual_chap": false, 00:15:48.649 "chap_group": 0, 00:15:48.649 "max_large_datain_per_connection": 64, 00:15:48.649 "max_r2t_per_connection": 4, 00:15:48.649 "pdu_pool_size": 36864, 00:15:48.649 "immediate_data_pool_size": 16384, 00:15:48.649 "data_out_pool_size": 2048 00:15:48.649 } 00:15:48.649 } 00:15:48.649 ] 00:15:48.649 } 00:15:48.649 ] 00:15:48.649 }' 00:15:48.649 05:13:07 -- ublk/ublk.sh@116 -- # killprocess 70318 00:15:48.649 05:13:07 -- common/autotest_common.sh@926 -- # '[' -z 70318 ']' 00:15:48.649 05:13:07 -- common/autotest_common.sh@930 -- # kill -0 70318 00:15:48.649 05:13:07 -- common/autotest_common.sh@931 -- # uname 00:15:48.649 05:13:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:48.649 05:13:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70318 00:15:48.649 05:13:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:48.649 killing process with pid 
70318 00:15:48.649 05:13:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:48.649 05:13:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70318' 00:15:48.649 05:13:07 -- common/autotest_common.sh@945 -- # kill 70318 00:15:48.649 05:13:07 -- common/autotest_common.sh@950 -- # wait 70318 00:15:50.581 [2024-07-26 05:13:09.174585] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:50.581 [2024-07-26 05:13:09.205264] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:50.581 [2024-07-26 05:13:09.205409] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:50.581 [2024-07-26 05:13:09.213239] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:50.581 [2024-07-26 05:13:09.213293] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:50.581 [2024-07-26 05:13:09.213302] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:50.581 [2024-07-26 05:13:09.213333] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:50.581 [2024-07-26 05:13:09.213485] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:51.517 05:13:10 -- ublk/ublk.sh@119 -- # tgtpid=70390 00:15:51.517 05:13:10 -- ublk/ublk.sh@121 -- # waitforlisten 70390 00:15:51.517 05:13:10 -- common/autotest_common.sh@819 -- # '[' -z 70390 ']' 00:15:51.517 05:13:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.517 05:13:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:51.517 05:13:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:51.517 05:13:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:51.517 05:13:10 -- common/autotest_common.sh@10 -- # set +x 00:15:51.517 05:13:10 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:51.517 05:13:10 -- ublk/ublk.sh@118 -- # echo '{ 00:15:51.517 "subsystems": [ 00:15:51.517 { 00:15:51.517 "subsystem": "iobuf", 00:15:51.517 "config": [ 00:15:51.517 { 00:15:51.517 "method": "iobuf_set_options", 00:15:51.517 "params": { 00:15:51.517 "small_pool_count": 8192, 00:15:51.517 "large_pool_count": 1024, 00:15:51.517 "small_bufsize": 8192, 00:15:51.517 "large_bufsize": 135168 00:15:51.517 } 00:15:51.518 } 00:15:51.518 ] 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "subsystem": "sock", 00:15:51.518 "config": [ 00:15:51.518 { 00:15:51.518 "method": "sock_impl_set_options", 00:15:51.518 "params": { 00:15:51.518 "impl_name": "posix", 00:15:51.518 "recv_buf_size": 2097152, 00:15:51.518 "send_buf_size": 2097152, 00:15:51.518 "enable_recv_pipe": true, 00:15:51.518 "enable_quickack": false, 00:15:51.518 "enable_placement_id": 0, 00:15:51.518 "enable_zerocopy_send_server": true, 00:15:51.518 "enable_zerocopy_send_client": false, 00:15:51.518 "zerocopy_threshold": 0, 00:15:51.518 "tls_version": 0, 00:15:51.518 "enable_ktls": false 00:15:51.518 } 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "method": "sock_impl_set_options", 00:15:51.518 "params": { 00:15:51.518 "impl_name": "ssl", 00:15:51.518 "recv_buf_size": 4096, 00:15:51.518 "send_buf_size": 4096, 00:15:51.518 "enable_recv_pipe": true, 00:15:51.518 "enable_quickack": false, 00:15:51.518 "enable_placement_id": 0, 00:15:51.518 "enable_zerocopy_send_server": true, 00:15:51.518 "enable_zerocopy_send_client": false, 00:15:51.518 "zerocopy_threshold": 0, 00:15:51.518 "tls_version": 0, 00:15:51.518 "enable_ktls": false 00:15:51.518 } 00:15:51.518 } 00:15:51.518 ] 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "subsystem": "vmd", 00:15:51.518 "config": [] 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "subsystem": "accel", 00:15:51.518 "config": [ 00:15:51.518 { 00:15:51.518 "method": "accel_set_options", 00:15:51.518 "params": { 00:15:51.518 "small_cache_size": 128, 00:15:51.518 "large_cache_size": 16, 00:15:51.518 "task_count": 2048, 00:15:51.518 "sequence_count": 2048, 00:15:51.518 "buf_count": 2048 00:15:51.518 } 00:15:51.518 } 00:15:51.518 ] 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "subsystem": "bdev", 00:15:51.518 "config": [ 00:15:51.518 { 00:15:51.518 "method": "bdev_set_options", 00:15:51.518 "params": { 00:15:51.518 "bdev_io_pool_size": 65535, 00:15:51.518 "bdev_io_cache_size": 256, 00:15:51.518 "bdev_auto_examine": true, 00:15:51.518 "iobuf_small_cache_size": 128, 00:15:51.518 "iobuf_large_cache_size": 16 00:15:51.518 } 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "method": "bdev_raid_set_options", 00:15:51.518 "params": { 00:15:51.518 "process_window_size_kb": 1024 00:15:51.518 } 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "method": "bdev_iscsi_set_options", 00:15:51.518 "params": { 00:15:51.518 "timeout_sec": 30 00:15:51.518 } 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "method": "bdev_nvme_set_options", 00:15:51.518 "params": { 00:15:51.518 "action_on_timeout": "none", 00:15:51.518 "timeout_us": 0, 00:15:51.518 "timeout_admin_us": 0, 00:15:51.518 "keep_alive_timeout_ms": 10000, 00:15:51.518 "transport_retry_count": 4, 00:15:51.518 "arbitration_burst": 0, 00:15:51.518 "low_priority_weight": 0, 00:15:51.518 "medium_priority_weight": 0, 00:15:51.518 "high_priority_weight": 0, 
00:15:51.518 "nvme_adminq_poll_period_us": 10000, 00:15:51.518 "nvme_ioq_poll_period_us": 0, 00:15:51.518 "io_queue_requests": 0, 00:15:51.518 "delay_cmd_submit": true, 00:15:51.518 "bdev_retry_count": 3, 00:15:51.518 "transport_ack_timeout": 0, 00:15:51.518 "ctrlr_loss_timeout_sec": 0, 00:15:51.518 "reconnect_delay_sec": 0, 00:15:51.518 "fast_io_fail_timeout_sec": 0, 00:15:51.518 "generate_uuids": false, 00:15:51.518 "transport_tos": 0, 00:15:51.518 "io_path_stat": false, 00:15:51.518 "allow_accel_sequence": false 00:15:51.518 } 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "method": "bdev_nvme_set_hotplug", 00:15:51.518 "params": { 00:15:51.518 "period_us": 100000, 00:15:51.518 "enable": false 00:15:51.518 } 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "method": "bdev_malloc_create", 00:15:51.518 "params": { 00:15:51.518 "name": "malloc0", 00:15:51.518 "num_blocks": 8192, 00:15:51.518 "block_size": 4096, 00:15:51.518 "physical_block_size": 4096, 00:15:51.518 "uuid": "a7d41780-fc60-4798-ada1-2a383db78add", 00:15:51.518 "optimal_io_boundary": 0 00:15:51.518 } 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "method": "bdev_wait_for_examine" 00:15:51.518 } 00:15:51.518 ] 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "subsystem": "scsi", 00:15:51.518 "config": null 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "subsystem": "scheduler", 00:15:51.518 "config": [ 00:15:51.518 { 00:15:51.518 "method": "framework_set_scheduler", 00:15:51.518 "params": { 00:15:51.518 "name": "static" 00:15:51.518 } 00:15:51.518 } 00:15:51.518 ] 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "subsystem": "vhost_scsi", 00:15:51.518 "config": [] 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "subsystem": "vhost_blk", 00:15:51.518 "config": [] 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "subsystem": "ublk", 00:15:51.518 "config": [ 00:15:51.518 { 00:15:51.518 "method": "ublk_create_target", 00:15:51.518 "params": { 00:15:51.518 "cpumask": "1" 00:15:51.518 } 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "method": "ublk_start_disk", 00:15:51.518 "params": { 00:15:51.518 "bdev_name": "malloc0", 00:15:51.518 "ublk_id": 0, 00:15:51.518 "num_queues": 1, 00:15:51.518 "queue_depth": 128 00:15:51.518 } 00:15:51.518 } 00:15:51.518 ] 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "subsystem": "nbd", 00:15:51.518 "config": [] 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "subsystem": "nvmf", 00:15:51.518 "config": [ 00:15:51.518 { 00:15:51.518 "method": "nvmf_set_config", 00:15:51.518 "params": { 00:15:51.518 "discovery_filter": "match_any", 00:15:51.518 "admin_cmd_passthru": { 00:15:51.518 "identify_ctrlr": false 00:15:51.518 } 00:15:51.518 } 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "method": "nvmf_set_max_subsystems", 00:15:51.518 "params": { 00:15:51.518 "max_subsystems": 1024 00:15:51.518 } 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "method": "nvmf_set_crdt", 00:15:51.518 "params": { 00:15:51.518 "crdt1": 0, 00:15:51.518 "crdt2": 0, 00:15:51.518 "crdt3": 0 00:15:51.518 } 00:15:51.518 } 00:15:51.518 ] 00:15:51.518 }, 00:15:51.518 { 00:15:51.518 "subsystem": "iscsi", 00:15:51.518 "config": [ 00:15:51.518 { 00:15:51.518 "method": "iscsi_set_options", 00:15:51.518 "params": { 00:15:51.518 "node_base": "iqn.2016-06.io.spdk", 00:15:51.518 "max_sessions": 128, 00:15:51.518 "max_connections_per_session": 2, 00:15:51.518 "max_queue_depth": 64, 00:15:51.518 "default_time2wait": 2, 00:15:51.518 "default_time2retain": 20, 00:15:51.518 "first_burst_length": 8192, 00:15:51.518 "immediate_data": true, 00:15:51.518 "allow_duplicated_isid": false, 00:15:51.518 
"error_recovery_level": 0, 00:15:51.518 "nop_timeout": 60, 00:15:51.518 "nop_in_interval": 30, 00:15:51.518 "disable_chap": false, 00:15:51.518 "require_chap": false, 00:15:51.518 "mutual_chap": false, 00:15:51.518 "chap_group": 0, 00:15:51.518 "max_large_datain_per_connection": 64, 00:15:51.518 "max_r2t_per_connection": 4, 00:15:51.518 "pdu_pool_size": 36864, 00:15:51.518 "immediate_data_pool_size": 16384, 00:15:51.518 "data_out_pool_size": 2048 00:15:51.518 } 00:15:51.518 } 00:15:51.518 ] 00:15:51.518 } 00:15:51.518 ] 00:15:51.518 }' 00:15:51.777 [2024-07-26 05:13:10.719044] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:51.777 [2024-07-26 05:13:10.719225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70390 ] 00:15:51.777 [2024-07-26 05:13:10.881056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.036 [2024-07-26 05:13:11.107588] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:52.036 [2024-07-26 05:13:11.107788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.413 [2024-07-26 05:13:12.180437] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:53.413 [2024-07-26 05:13:12.187334] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:53.413 [2024-07-26 05:13:12.187425] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:53.413 [2024-07-26 05:13:12.187435] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:53.413 [2024-07-26 05:13:12.187442] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:53.413 [2024-07-26 05:13:12.196292] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:53.413 [2024-07-26 05:13:12.196311] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:53.413 [2024-07-26 05:13:12.203238] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:53.413 [2024-07-26 05:13:12.203333] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:53.413 [2024-07-26 05:13:12.220240] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:53.413 05:13:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:53.413 05:13:12 -- common/autotest_common.sh@852 -- # return 0 00:15:53.413 05:13:12 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:53.413 05:13:12 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:53.413 05:13:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:53.413 05:13:12 -- common/autotest_common.sh@10 -- # set +x 00:15:53.413 05:13:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:53.413 05:13:12 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:53.413 05:13:12 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:53.413 05:13:12 -- ublk/ublk.sh@125 -- # killprocess 70390 00:15:53.413 05:13:12 -- common/autotest_common.sh@926 -- # '[' -z 70390 ']' 00:15:53.413 05:13:12 -- common/autotest_common.sh@930 -- # kill -0 70390 00:15:53.413 05:13:12 -- common/autotest_common.sh@931 -- # uname 00:15:53.413 05:13:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:15:53.413 05:13:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70390 00:15:53.413 05:13:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:53.413 05:13:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:53.413 killing process with pid 70390 00:15:53.413 05:13:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70390' 00:15:53.413 05:13:12 -- common/autotest_common.sh@945 -- # kill 70390 00:15:53.413 05:13:12 -- common/autotest_common.sh@950 -- # wait 70390 00:15:54.790 [2024-07-26 05:13:13.851292] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:54.790 [2024-07-26 05:13:13.896266] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:54.790 [2024-07-26 05:13:13.896428] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:55.048 [2024-07-26 05:13:13.904235] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:55.048 [2024-07-26 05:13:13.904281] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:55.048 [2024-07-26 05:13:13.904290] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:55.048 [2024-07-26 05:13:13.904322] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:55.048 [2024-07-26 05:13:13.904477] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:56.424 05:13:15 -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:56.424 00:15:56.424 real 0m9.671s 00:15:56.424 user 0m8.699s 00:15:56.424 sys 0m2.190s 00:15:56.424 ************************************ 00:15:56.424 END TEST test_save_ublk_config 00:15:56.424 ************************************ 00:15:56.424 05:13:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.424 05:13:15 -- common/autotest_common.sh@10 -- # set +x 00:15:56.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.424 05:13:15 -- ublk/ublk.sh@139 -- # spdk_pid=70477 00:15:56.424 05:13:15 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:56.424 05:13:15 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.424 05:13:15 -- ublk/ublk.sh@141 -- # waitforlisten 70477 00:15:56.424 05:13:15 -- common/autotest_common.sh@819 -- # '[' -z 70477 ']' 00:15:56.424 05:13:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.424 05:13:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:56.424 05:13:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.424 05:13:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:56.424 05:13:15 -- common/autotest_common.sh@10 -- # set +x 00:15:56.424 [2024-07-26 05:13:15.449355] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
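Each ublk test stage boots its own spdk_tgt (here pid 70477, with -m 0x3 so two reactors come up) and blocks in waitforlisten until the RPC socket answers before issuing commands. A simplified sketch of that handshake, not the literal autotest_common.sh implementation:

# Simplified start/wait pattern; waitforlisten's real retry logic is richer.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR"/build/bin/spdk_tgt -m 0x3 -L ublk &
spdk_pid=$!
until "$SPDK_DIR"/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
  kill -0 "$spdk_pid" || exit 1   # bail out if the target died during startup
  sleep 0.5
done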
00:15:56.425 [2024-07-26 05:13:15.449793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70477 ] 00:15:56.683 [2024-07-26 05:13:15.632575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:56.942 [2024-07-26 05:13:15.848360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:56.942 [2024-07-26 05:13:15.849101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.942 [2024-07-26 05:13:15.849135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.879 05:13:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:57.879 05:13:16 -- common/autotest_common.sh@852 -- # return 0 00:15:57.879 05:13:16 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:57.879 05:13:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:57.879 05:13:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:57.879 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:15:57.879 ************************************ 00:15:57.879 START TEST test_create_ublk 00:15:57.879 ************************************ 00:15:57.879 05:13:16 -- common/autotest_common.sh@1104 -- # test_create_ublk 00:15:57.879 05:13:16 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:57.879 05:13:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.879 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:15:57.879 [2024-07-26 05:13:16.976241] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:57.879 05:13:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:57.879 05:13:16 -- ublk/ublk.sh@33 -- # ublk_target= 00:15:57.879 05:13:16 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:57.879 05:13:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:57.879 05:13:16 -- common/autotest_common.sh@10 -- # set +x 00:15:58.446 05:13:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.446 05:13:17 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:58.446 05:13:17 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:58.446 05:13:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.446 05:13:17 -- common/autotest_common.sh@10 -- # set +x 00:15:58.446 [2024-07-26 05:13:17.313367] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:58.446 [2024-07-26 05:13:17.313804] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:58.446 [2024-07-26 05:13:17.313823] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:58.446 [2024-07-26 05:13:17.313835] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:58.446 [2024-07-26 05:13:17.321244] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:58.446 [2024-07-26 05:13:17.321274] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:58.446 [2024-07-26 05:13:17.329236] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:58.446 [2024-07-26 05:13:17.337426] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:58.446 [2024-07-26 05:13:17.354245] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:15:58.446 05:13:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.446 05:13:17 -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:58.446 05:13:17 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:58.446 05:13:17 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:58.446 05:13:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.446 05:13:17 -- common/autotest_common.sh@10 -- # set +x 00:15:58.446 05:13:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.446 05:13:17 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:58.446 { 00:15:58.446 "ublk_device": "/dev/ublkb0", 00:15:58.446 "id": 0, 00:15:58.446 "queue_depth": 512, 00:15:58.446 "num_queues": 4, 00:15:58.446 "bdev_name": "Malloc0" 00:15:58.446 } 00:15:58.446 ]' 00:15:58.446 05:13:17 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:58.446 05:13:17 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:58.446 05:13:17 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:58.446 05:13:17 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:58.446 05:13:17 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:58.446 05:13:17 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:58.446 05:13:17 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:58.446 05:13:17 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:58.446 05:13:17 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:58.704 05:13:17 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:58.704 05:13:17 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:58.704 05:13:17 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:58.704 05:13:17 -- lvol/common.sh@41 -- # local offset=0 00:15:58.704 05:13:17 -- lvol/common.sh@42 -- # local size=134217728 00:15:58.704 05:13:17 -- lvol/common.sh@43 -- # local rw=write 00:15:58.704 05:13:17 -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:58.704 05:13:17 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:58.704 05:13:17 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:58.704 05:13:17 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:58.704 05:13:17 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:58.704 05:13:17 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:58.704 05:13:17 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:58.704 fio: verification read phase will never start because write phase uses all of runtime 00:15:58.704 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:58.704 fio-3.35 00:15:58.704 Starting 1 process 00:16:10.906 00:16:10.906 fio_test: (groupid=0, jobs=1): err= 0: pid=70532: Fri Jul 26 05:13:27 2024 00:16:10.906 write: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(617MiB/10001msec); 0 zone resets 00:16:10.906 clat (usec): min=38, max=4060, avg=62.54, stdev=102.06 00:16:10.906 lat (usec): min=39, max=4060, avg=62.97, stdev=102.07 00:16:10.906 clat percentiles (usec): 00:16:10.906 | 1.00th=[ 41], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 57], 00:16:10.906 | 
30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 59], 00:16:10.906 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 64], 95.00th=[ 69], 00:16:10.906 | 99.00th=[ 81], 99.50th=[ 85], 99.90th=[ 2147], 99.95th=[ 2900], 00:16:10.906 | 99.99th=[ 3687] 00:16:10.906 bw ( KiB/s): min=60616, max=70392, per=100.00%, avg=63319.58, stdev=1855.04, samples=19 00:16:10.906 iops : min=15154, max=17598, avg=15829.89, stdev=463.76, samples=19 00:16:10.906 lat (usec) : 50=3.50%, 100=96.29%, 250=0.02%, 500=0.01%, 750=0.01% 00:16:10.906 lat (usec) : 1000=0.01% 00:16:10.906 lat (msec) : 2=0.05%, 4=0.11%, 10=0.01% 00:16:10.906 cpu : usr=3.11%, sys=9.37%, ctx=157933, majf=0, minf=796 00:16:10.906 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.906 issued rwts: total=0,157930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.906 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.906 00:16:10.906 Run status group 0 (all jobs): 00:16:10.906 WRITE: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=617MiB (647MB), run=10001-10001msec 00:16:10.906 00:16:10.906 Disk stats (read/write): 00:16:10.906 ublkb0: ios=0/156462, merge=0/0, ticks=0/8725, in_queue=8726, util=99.11% 00:16:10.906 05:13:27 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:10.906 05:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.906 05:13:27 -- common/autotest_common.sh@10 -- # set +x 00:16:10.906 [2024-07-26 05:13:27.829489] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:10.906 [2024-07-26 05:13:27.871255] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:10.906 [2024-07-26 05:13:27.872085] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:10.906 [2024-07-26 05:13:27.874995] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:10.906 [2024-07-26 05:13:27.875366] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:10.906 05:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.906 05:13:27 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:10.906 05:13:27 -- common/autotest_common.sh@640 -- # local es=0 00:16:10.906 05:13:27 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:10.906 05:13:27 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:10.906 [2024-07-26 05:13:27.878226] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:10.906 05:13:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.906 05:13:27 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:10.906 05:13:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.906 05:13:27 -- common/autotest_common.sh@643 -- # rpc_cmd ublk_stop_disk 0 00:16:10.906 05:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.906 05:13:27 -- common/autotest_common.sh@10 -- # set +x 00:16:10.906 [2024-07-26 05:13:27.893364] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:10.906 request: 00:16:10.906 { 00:16:10.906 "ublk_id": 0, 00:16:10.906 "method": "ublk_stop_disk", 00:16:10.906 "req_id": 1 00:16:10.906 } 00:16:10.906 Got JSON-RPC error response 00:16:10.906 response: 00:16:10.906 { 00:16:10.906 "code": -19, 
00:16:10.906 "message": "No such device" 00:16:10.906 } 00:16:10.906 05:13:27 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:10.906 05:13:27 -- common/autotest_common.sh@643 -- # es=1 00:16:10.906 05:13:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:10.906 05:13:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:10.906 05:13:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:10.906 05:13:27 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:10.906 05:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.906 05:13:27 -- common/autotest_common.sh@10 -- # set +x 00:16:10.906 [2024-07-26 05:13:27.904317] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:10.906 [2024-07-26 05:13:27.912229] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:10.906 [2024-07-26 05:13:27.912264] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:10.906 05:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.906 05:13:27 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:10.906 05:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.906 05:13:27 -- common/autotest_common.sh@10 -- # set +x 00:16:10.906 05:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.906 05:13:28 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:10.906 05:13:28 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:10.906 05:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.906 05:13:28 -- common/autotest_common.sh@10 -- # set +x 00:16:10.906 05:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.906 05:13:28 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:10.906 05:13:28 -- lvol/common.sh@26 -- # jq length 00:16:10.906 05:13:28 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:10.906 05:13:28 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:10.907 05:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.907 05:13:28 -- common/autotest_common.sh@10 -- # set +x 00:16:10.907 05:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.907 05:13:28 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:10.907 05:13:28 -- lvol/common.sh@28 -- # jq length 00:16:10.907 ************************************ 00:16:10.907 END TEST test_create_ublk 00:16:10.907 ************************************ 00:16:10.907 05:13:28 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:10.907 00:16:10.907 real 0m11.426s 00:16:10.907 user 0m0.679s 00:16:10.907 sys 0m1.058s 00:16:10.907 05:13:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.907 05:13:28 -- common/autotest_common.sh@10 -- # set +x 00:16:10.907 05:13:28 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:10.907 05:13:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:10.907 05:13:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.907 05:13:28 -- common/autotest_common.sh@10 -- # set +x 00:16:10.907 ************************************ 00:16:10.907 START TEST test_create_multi_ublk 00:16:10.907 ************************************ 00:16:10.907 05:13:28 -- common/autotest_common.sh@1104 -- # test_create_multi_ublk 00:16:10.907 05:13:28 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:10.907 05:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.907 05:13:28 -- common/autotest_common.sh@10 -- # set +x 00:16:10.907 [2024-07-26 05:13:28.468078] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target 
created successfully 00:16:10.907 05:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.907 05:13:28 -- ublk/ublk.sh@62 -- # ublk_target= 00:16:10.907 05:13:28 -- ublk/ublk.sh@64 -- # seq 0 3 00:16:10.907 05:13:28 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.907 05:13:28 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:10.907 05:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.907 05:13:28 -- common/autotest_common.sh@10 -- # set +x 00:16:10.907 05:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.907 05:13:28 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:10.907 05:13:28 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:10.907 05:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.907 05:13:28 -- common/autotest_common.sh@10 -- # set +x 00:16:10.907 [2024-07-26 05:13:28.799374] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:10.907 [2024-07-26 05:13:28.799816] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:10.907 [2024-07-26 05:13:28.799832] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:10.907 [2024-07-26 05:13:28.799843] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.907 [2024-07-26 05:13:28.808481] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.907 [2024-07-26 05:13:28.808601] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.907 [2024-07-26 05:13:28.815238] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.907 [2024-07-26 05:13:28.815796] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:10.907 [2024-07-26 05:13:28.832248] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.907 05:13:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.907 05:13:28 -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:10.907 05:13:28 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.907 05:13:28 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:10.907 05:13:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.907 05:13:28 -- common/autotest_common.sh@10 -- # set +x 00:16:10.907 05:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.907 05:13:29 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:10.907 05:13:29 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:10.907 05:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.907 05:13:29 -- common/autotest_common.sh@10 -- # set +x 00:16:10.907 [2024-07-26 05:13:29.183367] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:10.907 [2024-07-26 05:13:29.183818] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:10.907 [2024-07-26 05:13:29.183848] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:10.907 [2024-07-26 05:13:29.183856] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.907 [2024-07-26 05:13:29.192554] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.907 [2024-07-26 05:13:29.192578] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 
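The multi-disk setup being traced here repeats one pattern per device: create a malloc bdev, then expose it through the ublk target. A minimal standalone sketch of that loop, assuming a running spdk_tgt and this repo's rpc.py on PATH (the RPC names, sizes, and queue settings mirror the trace; the loop itself is illustrative, not the literal ublk.sh source):

# Create the ublk target once, then four 128 MiB malloc-backed disks,
# each exposed as /dev/ublkb$i with 4 queues of depth 512.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" ublk_create_target
for i in 0 1 2 3; do
    "$rpc" bdev_malloc_create -b "Malloc$i" 128 4096    # 128 MiB, 4 KiB blocks
    "$rpc" ublk_start_disk "Malloc$i" "$i" -q 4 -d 512  # ADD_DEV/SET_PARAMS/START_DEV
done
"$rpc" ublk_get_disks   # JSON array the test then checks field by field with jq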
00:16:10.907 [2024-07-26 05:13:29.199237] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.907 [2024-07-26 05:13:29.199819] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:10.907 [2024-07-26 05:13:29.208262] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.907 05:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.907 05:13:29 -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:10.907 05:13:29 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.907 05:13:29 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:10.907 05:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.907 05:13:29 -- common/autotest_common.sh@10 -- # set +x 00:16:10.907 05:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.907 05:13:29 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:10.907 05:13:29 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:10.907 05:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.907 05:13:29 -- common/autotest_common.sh@10 -- # set +x 00:16:10.907 [2024-07-26 05:13:29.548402] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:10.907 [2024-07-26 05:13:29.548833] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:10.907 [2024-07-26 05:13:29.548845] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:10.907 [2024-07-26 05:13:29.548858] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.907 [2024-07-26 05:13:29.556294] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.907 [2024-07-26 05:13:29.556359] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.907 [2024-07-26 05:13:29.564262] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.907 [2024-07-26 05:13:29.564946] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:10.907 [2024-07-26 05:13:29.573231] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.907 05:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.907 05:13:29 -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:10.907 05:13:29 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.907 05:13:29 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:10.907 05:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.907 05:13:29 -- common/autotest_common.sh@10 -- # set +x 00:16:10.907 05:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.907 05:13:29 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:10.907 05:13:29 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:10.907 05:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.907 05:13:29 -- common/autotest_common.sh@10 -- # set +x 00:16:10.907 [2024-07-26 05:13:29.915421] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:10.907 [2024-07-26 05:13:29.915873] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:10.907 [2024-07-26 05:13:29.915893] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:10.907 [2024-07-26 05:13:29.915902] ublk.c: 
433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.907 [2024-07-26 05:13:29.923261] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.907 [2024-07-26 05:13:29.923285] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.907 [2024-07-26 05:13:29.931249] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.907 [2024-07-26 05:13:29.931819] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:10.907 [2024-07-26 05:13:29.934976] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.907 05:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.907 05:13:29 -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:10.907 05:13:29 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:10.907 05:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.907 05:13:29 -- common/autotest_common.sh@10 -- # set +x 00:16:10.907 05:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.907 05:13:29 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:10.907 { 00:16:10.907 "ublk_device": "/dev/ublkb0", 00:16:10.907 "id": 0, 00:16:10.907 "queue_depth": 512, 00:16:10.907 "num_queues": 4, 00:16:10.907 "bdev_name": "Malloc0" 00:16:10.907 }, 00:16:10.907 { 00:16:10.907 "ublk_device": "/dev/ublkb1", 00:16:10.907 "id": 1, 00:16:10.907 "queue_depth": 512, 00:16:10.907 "num_queues": 4, 00:16:10.907 "bdev_name": "Malloc1" 00:16:10.907 }, 00:16:10.907 { 00:16:10.907 "ublk_device": "/dev/ublkb2", 00:16:10.907 "id": 2, 00:16:10.907 "queue_depth": 512, 00:16:10.907 "num_queues": 4, 00:16:10.907 "bdev_name": "Malloc2" 00:16:10.907 }, 00:16:10.907 { 00:16:10.907 "ublk_device": "/dev/ublkb3", 00:16:10.907 "id": 3, 00:16:10.907 "queue_depth": 512, 00:16:10.907 "num_queues": 4, 00:16:10.907 "bdev_name": "Malloc3" 00:16:10.907 } 00:16:10.907 ]' 00:16:10.907 05:13:29 -- ublk/ublk.sh@72 -- # seq 0 3 00:16:10.907 05:13:29 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.907 05:13:29 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:11.166 05:13:30 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:11.166 05:13:30 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:11.166 05:13:30 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:11.166 05:13:30 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:11.166 05:13:30 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:11.166 05:13:30 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:11.166 05:13:30 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:11.166 05:13:30 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:11.166 05:13:30 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:11.166 05:13:30 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.166 05:13:30 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:11.166 05:13:30 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:11.166 05:13:30 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:11.425 05:13:30 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:11.425 05:13:30 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:11.425 05:13:30 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:11.425 05:13:30 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:11.425 05:13:30 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:11.425 05:13:30 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:11.425 05:13:30 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 
]] 00:16:11.425 05:13:30 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.425 05:13:30 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:11.425 05:13:30 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:11.425 05:13:30 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:11.425 05:13:30 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:11.425 05:13:30 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:11.683 05:13:30 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:11.683 05:13:30 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:11.683 05:13:30 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:11.683 05:13:30 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:11.683 05:13:30 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:11.683 05:13:30 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.683 05:13:30 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:11.683 05:13:30 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:11.683 05:13:30 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:11.683 05:13:30 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:11.683 05:13:30 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:11.683 05:13:30 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:11.683 05:13:30 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:11.941 05:13:30 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:11.941 05:13:30 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:11.941 05:13:30 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:11.941 05:13:30 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:11.941 05:13:30 -- ublk/ublk.sh@85 -- # seq 0 3 00:16:11.941 05:13:30 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.941 05:13:30 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:11.941 05:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.941 05:13:30 -- common/autotest_common.sh@10 -- # set +x 00:16:11.941 [2024-07-26 05:13:30.887344] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:11.941 [2024-07-26 05:13:30.934606] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:11.941 [2024-07-26 05:13:30.936070] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:11.941 [2024-07-26 05:13:30.941245] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:11.941 [2024-07-26 05:13:30.941543] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:11.941 [2024-07-26 05:13:30.941561] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:11.941 05:13:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.941 05:13:30 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.941 05:13:30 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:11.941 05:13:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.941 05:13:30 -- common/autotest_common.sh@10 -- # set +x 00:16:11.941 [2024-07-26 05:13:30.956346] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:11.942 [2024-07-26 05:13:31.004240] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:11.942 [2024-07-26 05:13:31.005485] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:11.942 [2024-07-26 05:13:31.013242] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:11.942 [2024-07-26 05:13:31.013553] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:11.942 [2024-07-26 05:13:31.013568] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:11.942 05:13:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.942 05:13:31 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.942 05:13:31 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:11.942 05:13:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.942 05:13:31 -- common/autotest_common.sh@10 -- # set +x 00:16:11.942 [2024-07-26 05:13:31.020317] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:12.200 [2024-07-26 05:13:31.053272] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:12.200 [2024-07-26 05:13:31.057571] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:12.200 [2024-07-26 05:13:31.065262] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:12.200 [2024-07-26 05:13:31.065554] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:12.200 [2024-07-26 05:13:31.065577] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:12.200 05:13:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.200 05:13:31 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:12.200 05:13:31 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:12.200 05:13:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.200 05:13:31 -- common/autotest_common.sh@10 -- # set +x 00:16:12.200 [2024-07-26 05:13:31.080319] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:12.200 [2024-07-26 05:13:31.122585] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:12.200 [2024-07-26 05:13:31.127562] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:12.201 [2024-07-26 05:13:31.137232] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:12.201 [2024-07-26 05:13:31.137550] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:12.201 [2024-07-26 05:13:31.137570] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:12.201 05:13:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.201 05:13:31 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:12.460 [2024-07-26 05:13:31.390378] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:12.460 [2024-07-26 05:13:31.396872] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:12.460 [2024-07-26 05:13:31.396917] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:12.460 05:13:31 -- ublk/ublk.sh@93 -- # seq 0 3 00:16:12.460 05:13:31 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:12.460 05:13:31 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:12.460 05:13:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.460 05:13:31 -- common/autotest_common.sh@10 -- # set +x 00:16:12.719 05:13:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.719 05:13:31 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:12.719 05:13:31 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:12.719 05:13:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.719 05:13:31 -- common/autotest_common.sh@10 -- # set +x 00:16:13.286 05:13:32 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.286 05:13:32 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:13.286 05:13:32 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:13.286 05:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.286 05:13:32 -- common/autotest_common.sh@10 -- # set +x 00:16:13.545 05:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.545 05:13:32 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:13.545 05:13:32 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:13.545 05:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.545 05:13:32 -- common/autotest_common.sh@10 -- # set +x 00:16:13.804 05:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.804 05:13:32 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:13.804 05:13:32 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:13.804 05:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.804 05:13:32 -- common/autotest_common.sh@10 -- # set +x 00:16:13.804 05:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:13.804 05:13:32 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:13.804 05:13:32 -- lvol/common.sh@26 -- # jq length 00:16:13.804 05:13:32 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:13.804 05:13:32 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:13.804 05:13:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:13.804 05:13:32 -- common/autotest_common.sh@10 -- # set +x 00:16:14.063 05:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:14.063 05:13:32 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:14.063 05:13:32 -- lvol/common.sh@28 -- # jq length 00:16:14.063 ************************************ 00:16:14.063 END TEST test_create_multi_ublk 00:16:14.063 ************************************ 00:16:14.063 05:13:32 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:14.063 00:16:14.063 real 0m4.510s 00:16:14.063 user 0m1.143s 00:16:14.063 sys 0m0.215s 00:16:14.063 05:13:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.063 05:13:32 -- common/autotest_common.sh@10 -- # set +x 00:16:14.063 05:13:33 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:14.063 05:13:33 -- ublk/ublk.sh@147 -- # cleanup 00:16:14.063 05:13:33 -- ublk/ublk.sh@130 -- # killprocess 70477 00:16:14.063 05:13:33 -- common/autotest_common.sh@926 -- # '[' -z 70477 ']' 00:16:14.063 05:13:33 -- common/autotest_common.sh@930 -- # kill -0 70477 00:16:14.063 05:13:33 -- common/autotest_common.sh@931 -- # uname 00:16:14.063 05:13:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:14.063 05:13:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70477 00:16:14.063 killing process with pid 70477 00:16:14.063 05:13:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:14.063 05:13:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:14.063 05:13:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70477' 00:16:14.063 05:13:33 -- common/autotest_common.sh@945 -- # kill 70477 00:16:14.063 05:13:33 -- common/autotest_common.sh@950 -- # wait 70477 00:16:15.440 [2024-07-26 05:13:34.164714] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:15.440 [2024-07-26 05:13:34.164766] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:16.376 00:16:16.376 real 0m29.930s 00:16:16.376 user 0m45.243s 00:16:16.376 sys 0m8.581s 00:16:16.376 05:13:35 -- common/autotest_common.sh@1105 -- 
# xtrace_disable 00:16:16.376 ************************************ 00:16:16.376 END TEST ublk 00:16:16.376 ************************************ 00:16:16.376 05:13:35 -- common/autotest_common.sh@10 -- # set +x 00:16:16.376 05:13:35 -- spdk/autotest.sh@260 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:16.376 05:13:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:16.376 05:13:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.376 05:13:35 -- common/autotest_common.sh@10 -- # set +x 00:16:16.376 ************************************ 00:16:16.376 START TEST ublk_recovery 00:16:16.376 ************************************ 00:16:16.377 05:13:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:16.636 * Looking for test storage... 00:16:16.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:16.636 05:13:35 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:16.636 05:13:35 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:16.636 05:13:35 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:16.636 05:13:35 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:16.636 05:13:35 -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:16.636 05:13:35 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:16.636 05:13:35 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:16.636 05:13:35 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:16.636 05:13:35 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:16.636 05:13:35 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:16.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.636 05:13:35 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=70871 00:16:16.636 05:13:35 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:16.636 05:13:35 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 70871 00:16:16.636 05:13:35 -- common/autotest_common.sh@819 -- # '[' -z 70871 ']' 00:16:16.636 05:13:35 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:16.636 05:13:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.636 05:13:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.636 05:13:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.636 05:13:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.636 05:13:35 -- common/autotest_common.sh@10 -- # set +x 00:16:16.636 [2024-07-26 05:13:35.711392] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
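What follows is the crash-and-recover sequence this test exists for: put /dev/ublkb1 under fio load, kill -9 the target mid-I/O, then re-attach the same device from a fresh target with ublk_recover_disk so fio can finish its full run. Condensed into a sketch (the pid values are the ones from this particular run, and SPDK_BIN_DIR/rpc.py are the harness's own helpers):

# Shape of ublk_recovery.sh as exercised below; pid values vary per run.
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &             # first target (pid 70871 here)
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096          # 64 MiB backing bdev
rpc.py ublk_start_disk malloc0 1 -q 2 -d 128          # -> /dev/ublkb1
fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
    --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
fio_proc=$!
sleep 5
kill -9 70871                                         # crash the target mid-I/O
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &             # second target (pid 71031 here)
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096          # same bdev name as before
rpc.py ublk_recover_disk malloc0 1                    # GET_DEV_INFO, then START/END_USER_RECOVERY
wait "$fio_proc"                                      # fio completes all 60 s of I/O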
00:16:16.636 [2024-07-26 05:13:35.711541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70871 ] 00:16:16.895 [2024-07-26 05:13:35.892409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:17.154 [2024-07-26 05:13:36.111725] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:17.154 [2024-07-26 05:13:36.112299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.154 [2024-07-26 05:13:36.112349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.533 05:13:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:18.533 05:13:37 -- common/autotest_common.sh@852 -- # return 0 00:16:18.533 05:13:37 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:18.533 05:13:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.533 05:13:37 -- common/autotest_common.sh@10 -- # set +x 00:16:18.533 [2024-07-26 05:13:37.235356] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:18.533 05:13:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.533 05:13:37 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:18.533 05:13:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.533 05:13:37 -- common/autotest_common.sh@10 -- # set +x 00:16:18.533 malloc0 00:16:18.533 05:13:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.533 05:13:37 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:18.533 05:13:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.533 05:13:37 -- common/autotest_common.sh@10 -- # set +x 00:16:18.533 [2024-07-26 05:13:37.415357] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:16:18.533 [2024-07-26 05:13:37.415476] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:18.533 [2024-07-26 05:13:37.415487] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:18.533 [2024-07-26 05:13:37.415498] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.533 [2024-07-26 05:13:37.423252] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.533 [2024-07-26 05:13:37.423289] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.533 [2024-07-26 05:13:37.431233] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.533 [2024-07-26 05:13:37.431384] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:18.533 [2024-07-26 05:13:37.454240] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.533 1 00:16:18.533 05:13:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.533 05:13:37 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:19.469 05:13:38 -- ublk/ublk_recovery.sh@31 -- # fio_proc=70919 00:16:19.469 05:13:38 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:19.469 05:13:38 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:19.728 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:19.728 fio-3.35 00:16:19.728 Starting 1 process 00:16:25.000 05:13:43 -- ublk/ublk_recovery.sh@36 -- # kill -9 70871 00:16:25.000 05:13:43 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:30.268 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 70871 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:30.268 05:13:48 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71031 00:16:30.268 05:13:48 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:30.268 05:13:48 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71031 00:16:30.268 05:13:48 -- common/autotest_common.sh@819 -- # '[' -z 71031 ']' 00:16:30.268 05:13:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.268 05:13:48 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:30.268 05:13:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:30.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.268 05:13:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.268 05:13:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:30.268 05:13:48 -- common/autotest_common.sh@10 -- # set +x 00:16:30.268 [2024-07-26 05:13:48.604000] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:30.268 [2024-07-26 05:13:48.604129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71031 ] 00:16:30.268 [2024-07-26 05:13:48.765870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:30.268 [2024-07-26 05:13:48.991352] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:30.268 [2024-07-26 05:13:48.991746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.268 [2024-07-26 05:13:48.991794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.204 05:13:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:31.204 05:13:50 -- common/autotest_common.sh@852 -- # return 0 00:16:31.204 05:13:50 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:31.204 05:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.204 05:13:50 -- common/autotest_common.sh@10 -- # set +x 00:16:31.204 [2024-07-26 05:13:50.104220] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:31.204 05:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.204 05:13:50 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:31.204 05:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.204 05:13:50 -- common/autotest_common.sh@10 -- # set +x 00:16:31.204 malloc0 00:16:31.204 05:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.204 05:13:50 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:31.204 05:13:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.204 05:13:50 -- common/autotest_common.sh@10 -- # set +x 00:16:31.204 [2024-07-26 05:13:50.285369] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:31.204 [2024-07-26 05:13:50.285419] ublk.c: 
933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:31.204 [2024-07-26 05:13:50.285429] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:31.204 [2024-07-26 05:13:50.293274] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:31.204 [2024-07-26 05:13:50.293299] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:31.204 1 00:16:31.204 [2024-07-26 05:13:50.293392] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:31.204 05:13:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.204 05:13:50 -- ublk/ublk_recovery.sh@52 -- # wait 70919 00:16:31.204 [2024-07-26 05:13:50.301240] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:31.204 [2024-07-26 05:13:50.307804] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:31.462 [2024-07-26 05:13:50.315416] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:31.462 [2024-07-26 05:13:50.315447] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:27.716 00:17:27.716 fio_test: (groupid=0, jobs=1): err= 0: pid=70922: Fri Jul 26 05:14:38 2024 00:17:27.716 read: IOPS=22.8k, BW=88.9MiB/s (93.2MB/s)(5335MiB/60002msec) 00:17:27.716 slat (nsec): min=1856, max=554440, avg=5913.55, stdev=1634.61 00:17:27.716 clat (usec): min=880, max=6851.3k, avg=2739.56, stdev=45387.66 00:17:27.716 lat (usec): min=886, max=6851.3k, avg=2745.47, stdev=45387.66 00:17:27.716 clat percentiles (usec): 00:17:27.716 | 1.00th=[ 1958], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2245], 00:17:27.716 | 30.00th=[ 2278], 40.00th=[ 2278], 50.00th=[ 2311], 60.00th=[ 2343], 00:17:27.716 | 70.00th=[ 2343], 80.00th=[ 2376], 90.00th=[ 2573], 95.00th=[ 3490], 00:17:27.716 | 99.00th=[ 4948], 99.50th=[ 5538], 99.90th=[ 6783], 99.95th=[ 7570], 00:17:27.716 | 99.99th=[12518] 00:17:27.716 bw ( KiB/s): min=38936, max=106280, per=100.00%, avg=102244.25, stdev=9277.00, samples=106 00:17:27.716 iops : min= 9734, max=26570, avg=25561.06, stdev=2319.25, samples=106 00:17:27.716 write: IOPS=22.7k, BW=88.8MiB/s (93.1MB/s)(5329MiB/60002msec); 0 zone resets 00:17:27.716 slat (nsec): min=1932, max=261992, avg=5885.57, stdev=1546.20 00:17:27.716 clat (usec): min=738, max=6851.1k, avg=2874.17, stdev=48347.79 00:17:27.716 lat (usec): min=742, max=6851.1k, avg=2880.06, stdev=48347.79 00:17:27.716 clat percentiles (usec): 00:17:27.716 | 1.00th=[ 1975], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2343], 00:17:27.716 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2442], 00:17:27.716 | 70.00th=[ 2474], 80.00th=[ 2507], 90.00th=[ 2606], 95.00th=[ 3425], 00:17:27.716 | 99.00th=[ 4948], 99.50th=[ 5669], 99.90th=[ 6915], 99.95th=[ 7701], 00:17:27.716 | 99.99th=[12518] 00:17:27.716 bw ( KiB/s): min=39992, max=106528, per=100.00%, avg=102105.65, stdev=9108.46, samples=106 00:17:27.716 iops : min= 9998, max=26632, avg=25526.41, stdev=2277.11, samples=106 00:17:27.716 lat (usec) : 750=0.01%, 1000=0.01% 00:17:27.716 lat (msec) : 2=1.24%, 4=95.46%, 10=3.28%, 20=0.01%, >=2000=0.01% 00:17:27.716 cpu : usr=9.71%, sys=26.30%, ctx=93641, majf=0, minf=14 00:17:27.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:27.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.716 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.716 issued rwts: total=1365805,1364205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.716 00:17:27.716 Run status group 0 (all jobs): 00:17:27.716 READ: bw=88.9MiB/s (93.2MB/s), 88.9MiB/s-88.9MiB/s (93.2MB/s-93.2MB/s), io=5335MiB (5594MB), run=60002-60002msec 00:17:27.716 WRITE: bw=88.8MiB/s (93.1MB/s), 88.8MiB/s-88.8MiB/s (93.1MB/s-93.1MB/s), io=5329MiB (5588MB), run=60002-60002msec 00:17:27.716 00:17:27.716 Disk stats (read/write): 00:17:27.716 ublkb1: ios=1362854/1361264, merge=0/0, ticks=3639073/3681745, in_queue=7320819, util=99.94% 00:17:27.716 05:14:38 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:27.716 05:14:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.716 05:14:38 -- common/autotest_common.sh@10 -- # set +x 00:17:27.716 [2024-07-26 05:14:38.739228] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:27.716 [2024-07-26 05:14:38.779338] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:27.716 [2024-07-26 05:14:38.779542] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:27.716 [2024-07-26 05:14:38.787268] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:27.716 [2024-07-26 05:14:38.787375] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:27.716 [2024-07-26 05:14:38.787388] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:27.716 05:14:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.716 05:14:38 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:27.716 05:14:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:27.716 05:14:38 -- common/autotest_common.sh@10 -- # set +x 00:17:27.716 [2024-07-26 05:14:38.803304] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:17:27.716 [2024-07-26 05:14:38.811252] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:17:27.716 [2024-07-26 05:14:38.811291] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:27.716 05:14:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:27.716 05:14:38 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:27.716 05:14:38 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:27.716 05:14:38 -- ublk/ublk_recovery.sh@14 -- # killprocess 71031 00:17:27.716 05:14:38 -- common/autotest_common.sh@926 -- # '[' -z 71031 ']' 00:17:27.716 05:14:38 -- common/autotest_common.sh@930 -- # kill -0 71031 00:17:27.716 05:14:38 -- common/autotest_common.sh@931 -- # uname 00:17:27.716 05:14:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:27.716 05:14:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71031 00:17:27.716 killing process with pid 71031 00:17:27.716 05:14:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:27.716 05:14:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:27.716 05:14:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71031' 00:17:27.716 05:14:38 -- common/autotest_common.sh@945 -- # kill 71031 00:17:27.716 05:14:38 -- common/autotest_common.sh@950 -- # wait 71031 00:17:27.716 [2024-07-26 05:14:40.009769] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:17:27.716 [2024-07-26 05:14:40.009841] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:17:27.716 00:17:27.716 real 1m6.000s 
00:17:27.716 user 1m50.503s 00:17:27.716 sys 0m32.315s 00:17:27.716 05:14:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.716 ************************************ 00:17:27.716 END TEST ublk_recovery 00:17:27.716 ************************************ 00:17:27.716 05:14:41 -- common/autotest_common.sh@10 -- # set +x 00:17:27.716 05:14:41 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:17:27.716 05:14:41 -- spdk/autotest.sh@268 -- # timing_exit lib 00:17:27.716 05:14:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:27.716 05:14:41 -- common/autotest_common.sh@10 -- # set +x 00:17:27.716 05:14:41 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:17:27.716 05:14:41 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:17:27.716 05:14:41 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:17:27.716 05:14:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:27.716 05:14:41 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:27.716 05:14:41 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:27.716 05:14:41 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:27.716 05:14:41 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:27.716 05:14:41 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:27.716 05:14:41 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:27.716 05:14:41 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:27.716 05:14:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:27.716 05:14:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:27.716 05:14:41 -- common/autotest_common.sh@10 -- # set +x 00:17:27.716 ************************************ 00:17:27.716 START TEST ftl 00:17:27.716 ************************************ 00:17:27.716 05:14:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:27.716 * Looking for test storage... 00:17:27.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:27.716 05:14:41 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:27.716 05:14:41 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:27.716 05:14:41 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:27.716 05:14:41 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:27.716 05:14:41 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:27.716 05:14:41 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:27.716 05:14:41 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.716 05:14:41 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:27.716 05:14:41 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:27.716 05:14:41 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:27.716 05:14:41 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:27.716 05:14:41 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:27.716 05:14:41 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:27.716 05:14:41 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:27.716 05:14:41 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:27.716 05:14:41 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:27.716 05:14:41 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:27.716 05:14:41 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:27.716 05:14:41 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:27.716 05:14:41 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:27.716 05:14:41 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:27.716 05:14:41 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:27.716 05:14:41 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:27.717 05:14:41 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:27.717 05:14:41 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:27.717 05:14:41 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:27.717 05:14:41 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:27.717 05:14:41 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:27.717 05:14:41 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:27.717 05:14:41 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:27.717 05:14:41 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:27.717 05:14:41 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:27.717 05:14:41 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:27.717 05:14:41 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:27.717 05:14:41 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:27.717 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:27.717 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:27.717 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:27.717 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:27.717 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:27.717 05:14:42 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=71830 00:17:27.717 05:14:42 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:27.717 05:14:42 -- ftl/ftl.sh@38 -- # waitforlisten 71830 00:17:27.717 05:14:42 -- common/autotest_common.sh@819 -- # '[' -z 71830 ']' 00:17:27.717 05:14:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.717 05:14:42 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:17:27.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.717 05:14:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.717 05:14:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:27.717 05:14:42 -- common/autotest_common.sh@10 -- # set +x 00:17:27.717 [2024-07-26 05:14:42.472671] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:27.717 [2024-07-26 05:14:42.472827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71830 ] 00:17:27.717 [2024-07-26 05:14:42.660194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.717 [2024-07-26 05:14:42.979049] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:27.717 [2024-07-26 05:14:42.979236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.717 05:14:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:27.717 05:14:43 -- common/autotest_common.sh@852 -- # return 0 00:17:27.717 05:14:43 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:27.717 05:14:43 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:27.717 05:14:44 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:27.717 05:14:44 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:27.717 05:14:45 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:27.717 05:14:45 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:27.717 05:14:45 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:27.717 05:14:45 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:17:27.717 05:14:45 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:27.717 05:14:45 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:17:27.717 05:14:45 -- ftl/ftl.sh@50 -- # break 00:17:27.717 05:14:45 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:17:27.717 05:14:45 -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:27.717 05:14:45 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:27.717 05:14:45 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:27.717 05:14:45 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:17:27.717 05:14:45 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:27.717 05:14:45 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:17:27.717 05:14:45 -- ftl/ftl.sh@63 -- # break 00:17:27.717 05:14:45 -- ftl/ftl.sh@66 -- # killprocess 71830 00:17:27.717 05:14:45 -- common/autotest_common.sh@926 -- # '[' -z 71830 ']' 00:17:27.717 05:14:45 -- common/autotest_common.sh@930 -- # kill -0 71830 00:17:27.717 05:14:45 -- common/autotest_common.sh@931 -- # uname 00:17:27.717 05:14:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:27.717 05:14:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71830 00:17:27.717 05:14:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
00:17:27.717 killing process with pid 71830 00:17:27.717 05:14:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:27.717 05:14:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71830' 00:17:27.717 05:14:45 -- common/autotest_common.sh@945 -- # kill 71830 00:17:27.717 05:14:45 -- common/autotest_common.sh@950 -- # wait 71830 00:17:29.094 05:14:47 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:17:29.094 05:14:47 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:17:29.094 05:14:47 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:17:29.094 05:14:47 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:29.094 05:14:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:29.094 05:14:47 -- common/autotest_common.sh@10 -- # set +x 00:17:29.094 ************************************ 00:17:29.094 START TEST ftl_fio_basic 00:17:29.094 ************************************ 00:17:29.094 05:14:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:17:29.094 * Looking for test storage... 00:17:29.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:29.094 05:14:48 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:29.094 05:14:48 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:29.094 05:14:48 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:29.094 05:14:48 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:29.094 05:14:48 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:29.094 05:14:48 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:29.094 05:14:48 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:29.094 05:14:48 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:29.094 05:14:48 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:29.094 05:14:48 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.094 05:14:48 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.094 05:14:48 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:29.094 05:14:48 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:29.094 05:14:48 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:29.094 05:14:48 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:29.094 05:14:48 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:29.094 05:14:48 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:29.094 05:14:48 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.094 05:14:48 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:29.094 05:14:48 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:29.094 05:14:48 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:29.094 05:14:48 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:29.094 05:14:48 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:29.094 05:14:48 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:29.094 05:14:48 -- ftl/common.sh@22 -- # 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:29.094 05:14:48 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:29.094 05:14:48 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:29.094 05:14:48 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:29.094 05:14:48 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:29.094 05:14:48 -- ftl/fio.sh@11 -- # declare -A suite 00:17:29.094 05:14:48 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:29.094 05:14:48 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:29.094 05:14:48 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:29.094 05:14:48 -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:29.094 05:14:48 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:17:29.094 05:14:48 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:17:29.094 05:14:48 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:29.094 05:14:48 -- ftl/fio.sh@26 -- # uuid= 00:17:29.094 05:14:48 -- ftl/fio.sh@27 -- # timeout=240 00:17:29.094 05:14:48 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:29.094 05:14:48 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:29.094 05:14:48 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:29.094 05:14:48 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:29.094 05:14:48 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:29.094 05:14:48 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:29.094 05:14:48 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:29.094 05:14:48 -- ftl/fio.sh@45 -- # svcpid=71970 00:17:29.094 05:14:48 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:29.094 05:14:48 -- ftl/fio.sh@46 -- # waitforlisten 71970 00:17:29.094 05:14:48 -- common/autotest_common.sh@819 -- # '[' -z 71970 ']' 00:17:29.094 05:14:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.094 05:14:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.094 05:14:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.094 05:14:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.094 05:14:48 -- common/autotest_common.sh@10 -- # set +x 00:17:29.094 [2024-07-26 05:14:48.195507] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
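[Editor's note] The waitforlisten gate traced above (max_retries=100, socket /var/tmp/spdk.sock) is, in rough outline, a poll-until-RPC-answers loop. The body below is an assumption about its shape, not a copy of autotest_common.sh; only the retry count, socket path, and echo text appear in the log:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died early
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1   # never came up within max_retries
    }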
00:17:29.094 [2024-07-26 05:14:48.195667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71970 ] 00:17:29.354 [2024-07-26 05:14:48.384163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:29.612 [2024-07-26 05:14:48.688712] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:29.612 [2024-07-26 05:14:48.692259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.612 [2024-07-26 05:14:48.692379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.612 [2024-07-26 05:14:48.692405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.989 05:14:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.989 05:14:49 -- common/autotest_common.sh@852 -- # return 0 00:17:30.989 05:14:49 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:17:30.989 05:14:49 -- ftl/common.sh@54 -- # local name=nvme0 00:17:30.989 05:14:49 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:17:30.989 05:14:49 -- ftl/common.sh@56 -- # local size=103424 00:17:30.989 05:14:49 -- ftl/common.sh@59 -- # local base_bdev 00:17:30.989 05:14:49 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:17:30.989 05:14:49 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:30.989 05:14:49 -- ftl/common.sh@62 -- # local base_size 00:17:30.989 05:14:49 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:30.989 05:14:49 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:17:30.989 05:14:49 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:30.989 05:14:49 -- common/autotest_common.sh@1359 -- # local bs 00:17:30.989 05:14:49 -- common/autotest_common.sh@1360 -- # local nb 00:17:30.989 05:14:49 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:31.248 05:14:50 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:31.248 { 00:17:31.248 "name": "nvme0n1", 00:17:31.248 "aliases": [ 00:17:31.248 "f5cdab03-90f5-423e-b45b-5ebca0038585" 00:17:31.248 ], 00:17:31.248 "product_name": "NVMe disk", 00:17:31.248 "block_size": 4096, 00:17:31.248 "num_blocks": 1310720, 00:17:31.248 "uuid": "f5cdab03-90f5-423e-b45b-5ebca0038585", 00:17:31.248 "assigned_rate_limits": { 00:17:31.248 "rw_ios_per_sec": 0, 00:17:31.248 "rw_mbytes_per_sec": 0, 00:17:31.248 "r_mbytes_per_sec": 0, 00:17:31.248 "w_mbytes_per_sec": 0 00:17:31.248 }, 00:17:31.248 "claimed": false, 00:17:31.248 "zoned": false, 00:17:31.248 "supported_io_types": { 00:17:31.248 "read": true, 00:17:31.248 "write": true, 00:17:31.248 "unmap": true, 00:17:31.248 "write_zeroes": true, 00:17:31.248 "flush": true, 00:17:31.248 "reset": true, 00:17:31.248 "compare": true, 00:17:31.248 "compare_and_write": false, 00:17:31.248 "abort": true, 00:17:31.248 "nvme_admin": true, 00:17:31.248 "nvme_io": true 00:17:31.248 }, 00:17:31.248 "driver_specific": { 00:17:31.248 "nvme": [ 00:17:31.248 { 00:17:31.248 "pci_address": "0000:00:07.0", 00:17:31.248 "trid": { 00:17:31.248 "trtype": "PCIe", 00:17:31.248 "traddr": "0000:00:07.0" 00:17:31.248 }, 00:17:31.248 "ctrlr_data": { 00:17:31.248 "cntlid": 0, 00:17:31.248 "vendor_id": "0x1b36", 00:17:31.248 "model_number": "QEMU NVMe Ctrl", 00:17:31.248 "serial_number": 
"12341", 00:17:31.248 "firmware_revision": "8.0.0", 00:17:31.248 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:31.248 "oacs": { 00:17:31.248 "security": 0, 00:17:31.248 "format": 1, 00:17:31.248 "firmware": 0, 00:17:31.248 "ns_manage": 1 00:17:31.248 }, 00:17:31.248 "multi_ctrlr": false, 00:17:31.248 "ana_reporting": false 00:17:31.248 }, 00:17:31.248 "vs": { 00:17:31.248 "nvme_version": "1.4" 00:17:31.248 }, 00:17:31.248 "ns_data": { 00:17:31.248 "id": 1, 00:17:31.248 "can_share": false 00:17:31.248 } 00:17:31.248 } 00:17:31.248 ], 00:17:31.248 "mp_policy": "active_passive" 00:17:31.248 } 00:17:31.248 } 00:17:31.248 ]' 00:17:31.248 05:14:50 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:31.248 05:14:50 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:31.248 05:14:50 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:31.248 05:14:50 -- common/autotest_common.sh@1363 -- # nb=1310720 00:17:31.248 05:14:50 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:17:31.248 05:14:50 -- common/autotest_common.sh@1367 -- # echo 5120 00:17:31.248 05:14:50 -- ftl/common.sh@63 -- # base_size=5120 00:17:31.248 05:14:50 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:31.248 05:14:50 -- ftl/common.sh@67 -- # clear_lvols 00:17:31.248 05:14:50 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:31.248 05:14:50 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:31.508 05:14:50 -- ftl/common.sh@28 -- # stores= 00:17:31.508 05:14:50 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:31.769 05:14:50 -- ftl/common.sh@68 -- # lvs=586468da-d51d-4fb7-91f6-0cd94747fd2d 00:17:31.769 05:14:50 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 586468da-d51d-4fb7-91f6-0cd94747fd2d 00:17:32.027 05:14:50 -- ftl/fio.sh@48 -- # split_bdev=0a70669e-dab6-40bf-9fb4-88b54f225b4a 00:17:32.027 05:14:50 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 0a70669e-dab6-40bf-9fb4-88b54f225b4a 00:17:32.027 05:14:50 -- ftl/common.sh@35 -- # local name=nvc0 00:17:32.027 05:14:50 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:17:32.027 05:14:50 -- ftl/common.sh@37 -- # local base_bdev=0a70669e-dab6-40bf-9fb4-88b54f225b4a 00:17:32.027 05:14:50 -- ftl/common.sh@38 -- # local cache_size= 00:17:32.027 05:14:50 -- ftl/common.sh@41 -- # get_bdev_size 0a70669e-dab6-40bf-9fb4-88b54f225b4a 00:17:32.027 05:14:50 -- common/autotest_common.sh@1357 -- # local bdev_name=0a70669e-dab6-40bf-9fb4-88b54f225b4a 00:17:32.027 05:14:50 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:32.027 05:14:50 -- common/autotest_common.sh@1359 -- # local bs 00:17:32.027 05:14:50 -- common/autotest_common.sh@1360 -- # local nb 00:17:32.027 05:14:50 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a70669e-dab6-40bf-9fb4-88b54f225b4a 00:17:32.285 05:14:51 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:32.285 { 00:17:32.285 "name": "0a70669e-dab6-40bf-9fb4-88b54f225b4a", 00:17:32.285 "aliases": [ 00:17:32.285 "lvs/nvme0n1p0" 00:17:32.285 ], 00:17:32.285 "product_name": "Logical Volume", 00:17:32.285 "block_size": 4096, 00:17:32.285 "num_blocks": 26476544, 00:17:32.285 "uuid": "0a70669e-dab6-40bf-9fb4-88b54f225b4a", 00:17:32.285 "assigned_rate_limits": { 00:17:32.285 "rw_ios_per_sec": 0, 00:17:32.285 "rw_mbytes_per_sec": 0, 00:17:32.285 "r_mbytes_per_sec": 0, 00:17:32.285 
"w_mbytes_per_sec": 0 00:17:32.285 }, 00:17:32.285 "claimed": false, 00:17:32.285 "zoned": false, 00:17:32.285 "supported_io_types": { 00:17:32.285 "read": true, 00:17:32.285 "write": true, 00:17:32.285 "unmap": true, 00:17:32.285 "write_zeroes": true, 00:17:32.285 "flush": false, 00:17:32.285 "reset": true, 00:17:32.285 "compare": false, 00:17:32.285 "compare_and_write": false, 00:17:32.285 "abort": false, 00:17:32.285 "nvme_admin": false, 00:17:32.285 "nvme_io": false 00:17:32.285 }, 00:17:32.285 "driver_specific": { 00:17:32.285 "lvol": { 00:17:32.285 "lvol_store_uuid": "586468da-d51d-4fb7-91f6-0cd94747fd2d", 00:17:32.285 "base_bdev": "nvme0n1", 00:17:32.285 "thin_provision": true, 00:17:32.285 "snapshot": false, 00:17:32.285 "clone": false, 00:17:32.285 "esnap_clone": false 00:17:32.285 } 00:17:32.285 } 00:17:32.285 } 00:17:32.285 ]' 00:17:32.285 05:14:51 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:32.285 05:14:51 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:32.285 05:14:51 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:32.285 05:14:51 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:32.285 05:14:51 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:32.285 05:14:51 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:32.285 05:14:51 -- ftl/common.sh@41 -- # local base_size=5171 00:17:32.285 05:14:51 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:32.285 05:14:51 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:17:32.542 05:14:51 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:32.542 05:14:51 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:32.542 05:14:51 -- ftl/common.sh@48 -- # get_bdev_size 0a70669e-dab6-40bf-9fb4-88b54f225b4a 00:17:32.542 05:14:51 -- common/autotest_common.sh@1357 -- # local bdev_name=0a70669e-dab6-40bf-9fb4-88b54f225b4a 00:17:32.542 05:14:51 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:32.542 05:14:51 -- common/autotest_common.sh@1359 -- # local bs 00:17:32.542 05:14:51 -- common/autotest_common.sh@1360 -- # local nb 00:17:32.542 05:14:51 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a70669e-dab6-40bf-9fb4-88b54f225b4a 00:17:32.801 05:14:51 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:32.801 { 00:17:32.801 "name": "0a70669e-dab6-40bf-9fb4-88b54f225b4a", 00:17:32.801 "aliases": [ 00:17:32.801 "lvs/nvme0n1p0" 00:17:32.801 ], 00:17:32.801 "product_name": "Logical Volume", 00:17:32.801 "block_size": 4096, 00:17:32.801 "num_blocks": 26476544, 00:17:32.801 "uuid": "0a70669e-dab6-40bf-9fb4-88b54f225b4a", 00:17:32.801 "assigned_rate_limits": { 00:17:32.801 "rw_ios_per_sec": 0, 00:17:32.801 "rw_mbytes_per_sec": 0, 00:17:32.801 "r_mbytes_per_sec": 0, 00:17:32.801 "w_mbytes_per_sec": 0 00:17:32.801 }, 00:17:32.801 "claimed": false, 00:17:32.801 "zoned": false, 00:17:32.801 "supported_io_types": { 00:17:32.801 "read": true, 00:17:32.801 "write": true, 00:17:32.801 "unmap": true, 00:17:32.801 "write_zeroes": true, 00:17:32.801 "flush": false, 00:17:32.801 "reset": true, 00:17:32.801 "compare": false, 00:17:32.801 "compare_and_write": false, 00:17:32.801 "abort": false, 00:17:32.801 "nvme_admin": false, 00:17:32.801 "nvme_io": false 00:17:32.801 }, 00:17:32.801 "driver_specific": { 00:17:32.801 "lvol": { 00:17:32.801 "lvol_store_uuid": "586468da-d51d-4fb7-91f6-0cd94747fd2d", 00:17:32.801 "base_bdev": "nvme0n1", 00:17:32.801 "thin_provision": true, 
00:17:32.801 "snapshot": false, 00:17:32.801 "clone": false, 00:17:32.801 "esnap_clone": false 00:17:32.801 } 00:17:32.801 } 00:17:32.801 } 00:17:32.801 ]' 00:17:32.801 05:14:51 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:32.801 05:14:51 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:32.801 05:14:51 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:32.801 05:14:51 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:32.801 05:14:51 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:32.801 05:14:51 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:32.801 05:14:51 -- ftl/common.sh@48 -- # cache_size=5171 00:17:32.801 05:14:51 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:33.060 05:14:52 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:33.060 05:14:52 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:33.060 05:14:52 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:33.060 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:33.060 05:14:52 -- ftl/fio.sh@56 -- # get_bdev_size 0a70669e-dab6-40bf-9fb4-88b54f225b4a 00:17:33.060 05:14:52 -- common/autotest_common.sh@1357 -- # local bdev_name=0a70669e-dab6-40bf-9fb4-88b54f225b4a 00:17:33.060 05:14:52 -- common/autotest_common.sh@1358 -- # local bdev_info 00:17:33.060 05:14:52 -- common/autotest_common.sh@1359 -- # local bs 00:17:33.060 05:14:52 -- common/autotest_common.sh@1360 -- # local nb 00:17:33.060 05:14:52 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a70669e-dab6-40bf-9fb4-88b54f225b4a 00:17:33.320 05:14:52 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:17:33.320 { 00:17:33.320 "name": "0a70669e-dab6-40bf-9fb4-88b54f225b4a", 00:17:33.320 "aliases": [ 00:17:33.320 "lvs/nvme0n1p0" 00:17:33.320 ], 00:17:33.320 "product_name": "Logical Volume", 00:17:33.320 "block_size": 4096, 00:17:33.320 "num_blocks": 26476544, 00:17:33.320 "uuid": "0a70669e-dab6-40bf-9fb4-88b54f225b4a", 00:17:33.320 "assigned_rate_limits": { 00:17:33.320 "rw_ios_per_sec": 0, 00:17:33.320 "rw_mbytes_per_sec": 0, 00:17:33.320 "r_mbytes_per_sec": 0, 00:17:33.320 "w_mbytes_per_sec": 0 00:17:33.320 }, 00:17:33.320 "claimed": false, 00:17:33.320 "zoned": false, 00:17:33.320 "supported_io_types": { 00:17:33.320 "read": true, 00:17:33.320 "write": true, 00:17:33.320 "unmap": true, 00:17:33.320 "write_zeroes": true, 00:17:33.320 "flush": false, 00:17:33.320 "reset": true, 00:17:33.320 "compare": false, 00:17:33.320 "compare_and_write": false, 00:17:33.320 "abort": false, 00:17:33.320 "nvme_admin": false, 00:17:33.320 "nvme_io": false 00:17:33.320 }, 00:17:33.320 "driver_specific": { 00:17:33.320 "lvol": { 00:17:33.320 "lvol_store_uuid": "586468da-d51d-4fb7-91f6-0cd94747fd2d", 00:17:33.320 "base_bdev": "nvme0n1", 00:17:33.320 "thin_provision": true, 00:17:33.320 "snapshot": false, 00:17:33.320 "clone": false, 00:17:33.320 "esnap_clone": false 00:17:33.320 } 00:17:33.320 } 00:17:33.320 } 00:17:33.320 ]' 00:17:33.320 05:14:52 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:17:33.320 05:14:52 -- common/autotest_common.sh@1362 -- # bs=4096 00:17:33.320 05:14:52 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:17:33.320 05:14:52 -- common/autotest_common.sh@1363 -- # nb=26476544 00:17:33.320 05:14:52 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:17:33.320 05:14:52 -- common/autotest_common.sh@1367 -- # echo 103424 00:17:33.320 
05:14:52 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:33.320 05:14:52 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:33.320 05:14:52 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0a70669e-dab6-40bf-9fb4-88b54f225b4a -c nvc0n1p0 --l2p_dram_limit 60 00:17:33.579 [2024-07-26 05:14:52.460090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.579 [2024-07-26 05:14:52.460143] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:33.579 [2024-07-26 05:14:52.460162] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:33.579 [2024-07-26 05:14:52.460173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.579 [2024-07-26 05:14:52.460291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.579 [2024-07-26 05:14:52.460306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:33.579 [2024-07-26 05:14:52.460338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:33.579 [2024-07-26 05:14:52.460348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.579 [2024-07-26 05:14:52.460382] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:33.579 [2024-07-26 05:14:52.461600] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:33.579 [2024-07-26 05:14:52.461634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.579 [2024-07-26 05:14:52.461646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:33.579 [2024-07-26 05:14:52.461660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:17:33.579 [2024-07-26 05:14:52.461669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.579 [2024-07-26 05:14:52.461759] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID de76b76b-7c39-4503-8dbc-d7eee7b4ae74 00:17:33.579 [2024-07-26 05:14:52.463165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.579 [2024-07-26 05:14:52.463195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:33.579 [2024-07-26 05:14:52.463218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:33.579 [2024-07-26 05:14:52.463232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.579 [2024-07-26 05:14:52.470739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.579 [2024-07-26 05:14:52.470775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:33.579 [2024-07-26 05:14:52.470787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.435 ms 00:17:33.579 [2024-07-26 05:14:52.470820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.579 [2024-07-26 05:14:52.470918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.579 [2024-07-26 05:14:52.470935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:33.579 [2024-07-26 05:14:52.470946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:33.579 [2024-07-26 05:14:52.470962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.579 [2024-07-26 05:14:52.471046] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:33.579 [2024-07-26 05:14:52.471062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:33.579 [2024-07-26 05:14:52.471072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:33.579 [2024-07-26 05:14:52.471085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.579 [2024-07-26 05:14:52.471123] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:33.579 [2024-07-26 05:14:52.477316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.579 [2024-07-26 05:14:52.477347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:33.579 [2024-07-26 05:14:52.477362] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.202 ms 00:17:33.579 [2024-07-26 05:14:52.477375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.579 [2024-07-26 05:14:52.477423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.579 [2024-07-26 05:14:52.477433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:33.579 [2024-07-26 05:14:52.477447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:33.580 [2024-07-26 05:14:52.477457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.580 [2024-07-26 05:14:52.477506] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:33.580 [2024-07-26 05:14:52.477620] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:33.580 [2024-07-26 05:14:52.477640] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:33.580 [2024-07-26 05:14:52.477653] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:33.580 [2024-07-26 05:14:52.477669] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:33.580 [2024-07-26 05:14:52.477681] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:33.580 [2024-07-26 05:14:52.477694] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:33.580 [2024-07-26 05:14:52.477705] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:33.580 [2024-07-26 05:14:52.477719] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:33.580 [2024-07-26 05:14:52.477729] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:33.580 [2024-07-26 05:14:52.477742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-07-26 05:14:52.477754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:33.580 [2024-07-26 05:14:52.477767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:17:33.580 [2024-07-26 05:14:52.477777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.580 [2024-07-26 05:14:52.477847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-07-26 05:14:52.477859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:33.580 [2024-07-26 05:14:52.477871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.039 ms 00:17:33.580 [2024-07-26 05:14:52.477881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.580 [2024-07-26 05:14:52.477976] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:33.580 [2024-07-26 05:14:52.477987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:33.580 [2024-07-26 05:14:52.478004] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:33.580 [2024-07-26 05:14:52.478015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.580 [2024-07-26 05:14:52.478028] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:33.580 [2024-07-26 05:14:52.478037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:33.580 [2024-07-26 05:14:52.478049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:33.580 [2024-07-26 05:14:52.478059] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:33.580 [2024-07-26 05:14:52.478071] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:33.580 [2024-07-26 05:14:52.478080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:33.580 [2024-07-26 05:14:52.478091] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:33.580 [2024-07-26 05:14:52.478102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:33.580 [2024-07-26 05:14:52.478115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:33.580 [2024-07-26 05:14:52.478124] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:33.580 [2024-07-26 05:14:52.478136] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:33.580 [2024-07-26 05:14:52.478145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.580 [2024-07-26 05:14:52.478159] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:33.580 [2024-07-26 05:14:52.478171] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:33.580 [2024-07-26 05:14:52.478183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.580 [2024-07-26 05:14:52.478192] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:33.580 [2024-07-26 05:14:52.478203] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:33.580 [2024-07-26 05:14:52.478229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:33.580 [2024-07-26 05:14:52.478241] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:33.580 [2024-07-26 05:14:52.478251] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:33.580 [2024-07-26 05:14:52.478263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:33.580 [2024-07-26 05:14:52.478272] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:33.580 [2024-07-26 05:14:52.478283] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:33.580 [2024-07-26 05:14:52.478292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:33.580 [2024-07-26 05:14:52.478304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:33.580 [2024-07-26 05:14:52.478313] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:33.580 [2024-07-26 05:14:52.478324] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:33.580 [2024-07-26 05:14:52.478334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:33.580 [2024-07-26 05:14:52.478348] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:33.580 [2024-07-26 05:14:52.478357] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:33.580 [2024-07-26 05:14:52.478369] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:33.580 [2024-07-26 05:14:52.478378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:33.580 [2024-07-26 05:14:52.478389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:33.580 [2024-07-26 05:14:52.478398] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:33.580 [2024-07-26 05:14:52.478429] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:33.580 [2024-07-26 05:14:52.478438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:33.580 [2024-07-26 05:14:52.478449] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:33.580 [2024-07-26 05:14:52.478459] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:33.580 [2024-07-26 05:14:52.478471] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:33.580 [2024-07-26 05:14:52.478481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:33.580 [2024-07-26 05:14:52.478494] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:33.580 [2024-07-26 05:14:52.478503] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:33.580 [2024-07-26 05:14:52.478515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:33.580 [2024-07-26 05:14:52.478524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:33.580 [2024-07-26 05:14:52.478538] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:33.580 [2024-07-26 05:14:52.478550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:33.580 [2024-07-26 05:14:52.478563] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:33.580 [2024-07-26 05:14:52.478575] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:33.580 [2024-07-26 05:14:52.478589] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:33.580 [2024-07-26 05:14:52.478600] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:33.580 [2024-07-26 05:14:52.478613] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:33.580 [2024-07-26 05:14:52.478624] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:33.580 [2024-07-26 05:14:52.478637] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:33.580 [2024-07-26 05:14:52.478647] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:33.580 
[2024-07-26 05:14:52.478660] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:33.580 [2024-07-26 05:14:52.478670] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:33.580 [2024-07-26 05:14:52.478683] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:33.580 [2024-07-26 05:14:52.478693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:33.580 [2024-07-26 05:14:52.478714] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:33.580 [2024-07-26 05:14:52.478725] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:33.580 [2024-07-26 05:14:52.478746] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:33.580 [2024-07-26 05:14:52.478757] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:33.580 [2024-07-26 05:14:52.478773] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:33.580 [2024-07-26 05:14:52.478789] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:33.580 [2024-07-26 05:14:52.478805] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:33.580 [2024-07-26 05:14:52.478816] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:33.580 [2024-07-26 05:14:52.478832] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:33.580 [2024-07-26 05:14:52.478843] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-07-26 05:14:52.478859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:33.580 [2024-07-26 05:14:52.478873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:17:33.580 [2024-07-26 05:14:52.478885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.580 [2024-07-26 05:14:52.504550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.580 [2024-07-26 05:14:52.504594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:33.580 [2024-07-26 05:14:52.504608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.582 ms 00:17:33.581 [2024-07-26 05:14:52.504620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.581 [2024-07-26 05:14:52.504719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.581 [2024-07-26 05:14:52.504735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:33.581 [2024-07-26 05:14:52.504747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:33.581 [2024-07-26 05:14:52.504759] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.581 [2024-07-26 05:14:52.561385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.581 [2024-07-26 05:14:52.561435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:33.581 [2024-07-26 05:14:52.561453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.558 ms 00:17:33.581 [2024-07-26 05:14:52.561466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.581 [2024-07-26 05:14:52.561519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.581 [2024-07-26 05:14:52.561532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:33.581 [2024-07-26 05:14:52.561544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:33.581 [2024-07-26 05:14:52.561556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.581 [2024-07-26 05:14:52.562038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.581 [2024-07-26 05:14:52.562054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:33.581 [2024-07-26 05:14:52.562066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:17:33.581 [2024-07-26 05:14:52.562083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.581 [2024-07-26 05:14:52.562225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.581 [2024-07-26 05:14:52.562245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:33.581 [2024-07-26 05:14:52.562257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:17:33.581 [2024-07-26 05:14:52.562269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.581 [2024-07-26 05:14:52.603740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.581 [2024-07-26 05:14:52.603795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:33.581 [2024-07-26 05:14:52.603816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.439 ms 00:17:33.581 [2024-07-26 05:14:52.603834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.581 [2024-07-26 05:14:52.618385] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:33.581 [2024-07-26 05:14:52.635068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.581 [2024-07-26 05:14:52.635123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:33.581 [2024-07-26 05:14:52.635143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.052 ms 00:17:33.581 [2024-07-26 05:14:52.635157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.839 [2024-07-26 05:14:52.707689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.839 [2024-07-26 05:14:52.707750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:33.839 [2024-07-26 05:14:52.707786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.446 ms 00:17:33.839 [2024-07-26 05:14:52.707800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.839 [2024-07-26 05:14:52.707862] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
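[Editor's note] The 4GiB scrub size reported next follows directly from the layout dumped above: the data_nvc region (nvc region type:0x8, blocks: 4096.00 MiB) spans 0x100000 blocks of 4096 bytes each:

    echo $(( 0x100000 * 4096 / 1024**3 ))GiB    # -> 4GiB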
00:17:33.839 [2024-07-26 05:14:52.707876] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:37.119 [2024-07-26 05:14:55.883967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.119 [2024-07-26 05:14:55.884033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:37.119 [2024-07-26 05:14:55.884053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3176.085 ms 00:17:37.119 [2024-07-26 05:14:55.884064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.119 [2024-07-26 05:14:55.884328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.119 [2024-07-26 05:14:55.884342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:37.119 [2024-07-26 05:14:55.884356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:17:37.119 [2024-07-26 05:14:55.884366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.119 [2024-07-26 05:14:55.922707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.119 [2024-07-26 05:14:55.922759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:37.119 [2024-07-26 05:14:55.922778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.273 ms 00:17:37.119 [2024-07-26 05:14:55.922788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.119 [2024-07-26 05:14:55.960882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.119 [2024-07-26 05:14:55.960917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:37.119 [2024-07-26 05:14:55.960937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.040 ms 00:17:37.119 [2024-07-26 05:14:55.960946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.119 [2024-07-26 05:14:55.961430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.119 [2024-07-26 05:14:55.961445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:37.119 [2024-07-26 05:14:55.961458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:17:37.119 [2024-07-26 05:14:55.961471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.119 [2024-07-26 05:14:56.057500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.119 [2024-07-26 05:14:56.057546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:37.119 [2024-07-26 05:14:56.057563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.964 ms 00:17:37.119 [2024-07-26 05:14:56.057591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.119 [2024-07-26 05:14:56.096502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.119 [2024-07-26 05:14:56.096541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:37.119 [2024-07-26 05:14:56.096558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.860 ms 00:17:37.119 [2024-07-26 05:14:56.096568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.119 [2024-07-26 05:14:56.101159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.119 [2024-07-26 05:14:56.101194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
00:17:37.119 [2024-07-26 05:14:56.101234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.532 ms 00:17:37.119 [2024-07-26 05:14:56.101245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.119 [2024-07-26 05:14:56.140283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.119 [2024-07-26 05:14:56.140320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:37.119 [2024-07-26 05:14:56.140336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.966 ms 00:17:37.119 [2024-07-26 05:14:56.140362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.119 [2024-07-26 05:14:56.140434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.119 [2024-07-26 05:14:56.140448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:37.119 [2024-07-26 05:14:56.140468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:37.119 [2024-07-26 05:14:56.140479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.119 [2024-07-26 05:14:56.140609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.119 [2024-07-26 05:14:56.140621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:37.119 [2024-07-26 05:14:56.140637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:37.119 [2024-07-26 05:14:56.140647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.119 [2024-07-26 05:14:56.141786] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3681.197 ms, result 0 00:17:37.119 { 00:17:37.119 "name": "ftl0", 00:17:37.119 "uuid": "de76b76b-7c39-4503-8dbc-d7eee7b4ae74" 00:17:37.119 } 00:17:37.119 05:14:56 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:37.119 05:14:56 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:17:37.119 05:14:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:37.119 05:14:56 -- common/autotest_common.sh@889 -- # local i 00:17:37.119 05:14:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:37.119 05:14:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:37.119 05:14:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:37.378 05:14:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:37.637 [ 00:17:37.637 { 00:17:37.637 "name": "ftl0", 00:17:37.637 "aliases": [ 00:17:37.637 "de76b76b-7c39-4503-8dbc-d7eee7b4ae74" 00:17:37.637 ], 00:17:37.637 "product_name": "FTL disk", 00:17:37.637 "block_size": 4096, 00:17:37.637 "num_blocks": 20971520, 00:17:37.637 "uuid": "de76b76b-7c39-4503-8dbc-d7eee7b4ae74", 00:17:37.637 "assigned_rate_limits": { 00:17:37.637 "rw_ios_per_sec": 0, 00:17:37.637 "rw_mbytes_per_sec": 0, 00:17:37.637 "r_mbytes_per_sec": 0, 00:17:37.637 "w_mbytes_per_sec": 0 00:17:37.637 }, 00:17:37.637 "claimed": false, 00:17:37.637 "zoned": false, 00:17:37.637 "supported_io_types": { 00:17:37.637 "read": true, 00:17:37.637 "write": true, 00:17:37.637 "unmap": true, 00:17:37.637 "write_zeroes": true, 00:17:37.637 "flush": true, 00:17:37.637 "reset": false, 00:17:37.637 "compare": false, 00:17:37.637 "compare_and_write": false, 00:17:37.637 "abort": false, 00:17:37.637 "nvme_admin": false, 00:17:37.637 "nvme_io": false 00:17:37.637 }, 
00:17:37.637 "driver_specific": { 00:17:37.637 "ftl": { 00:17:37.637 "base_bdev": "0a70669e-dab6-40bf-9fb4-88b54f225b4a", 00:17:37.637 "cache": "nvc0n1p0" 00:17:37.637 } 00:17:37.637 } 00:17:37.637 } 00:17:37.637 ] 00:17:37.637 05:14:56 -- common/autotest_common.sh@895 -- # return 0 00:17:37.637 05:14:56 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:37.637 05:14:56 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:37.897 05:14:56 -- ftl/fio.sh@70 -- # echo ']}' 00:17:37.897 05:14:56 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:37.897 [2024-07-26 05:14:56.974428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.897 [2024-07-26 05:14:56.974483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:37.897 [2024-07-26 05:14:56.974500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:37.897 [2024-07-26 05:14:56.974513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.897 [2024-07-26 05:14:56.974554] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:37.897 [2024-07-26 05:14:56.978057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.897 [2024-07-26 05:14:56.978089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:37.897 [2024-07-26 05:14:56.978104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.480 ms 00:17:37.897 [2024-07-26 05:14:56.978115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.897 [2024-07-26 05:14:56.978579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.897 [2024-07-26 05:14:56.978599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:37.897 [2024-07-26 05:14:56.978613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:17:37.897 [2024-07-26 05:14:56.978624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.897 [2024-07-26 05:14:56.981244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.897 [2024-07-26 05:14:56.981266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:37.897 [2024-07-26 05:14:56.981280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.592 ms 00:17:37.897 [2024-07-26 05:14:56.981291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.897 [2024-07-26 05:14:56.986534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.897 [2024-07-26 05:14:56.986566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:37.897 [2024-07-26 05:14:56.986584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.203 ms 00:17:37.897 [2024-07-26 05:14:56.986594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.172 [2024-07-26 05:14:57.024812] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.172 [2024-07-26 05:14:57.024849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:38.172 [2024-07-26 05:14:57.024866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.117 ms 00:17:38.172 [2024-07-26 05:14:57.024876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.172 [2024-07-26 05:14:57.048906] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.172 [2024-07-26 05:14:57.048946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:38.172 [2024-07-26 05:14:57.048964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.966 ms 00:17:38.172 [2024-07-26 05:14:57.048975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.172 [2024-07-26 05:14:57.049185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.172 [2024-07-26 05:14:57.049202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:38.172 [2024-07-26 05:14:57.049237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:17:38.172 [2024-07-26 05:14:57.049265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.172 [2024-07-26 05:14:57.088157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.172 [2024-07-26 05:14:57.088193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:38.172 [2024-07-26 05:14:57.088219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.852 ms 00:17:38.172 [2024-07-26 05:14:57.088230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.172 [2024-07-26 05:14:57.126158] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.172 [2024-07-26 05:14:57.126193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:38.172 [2024-07-26 05:14:57.126219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.855 ms 00:17:38.172 [2024-07-26 05:14:57.126230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.172 [2024-07-26 05:14:57.164780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.172 [2024-07-26 05:14:57.164814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:38.172 [2024-07-26 05:14:57.164830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.502 ms 00:17:38.172 [2024-07-26 05:14:57.164840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.172 [2024-07-26 05:14:57.202549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.172 [2024-07-26 05:14:57.202583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:38.172 [2024-07-26 05:14:57.202598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.583 ms 00:17:38.172 [2024-07-26 05:14:57.202625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.172 [2024-07-26 05:14:57.202680] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:38.172 [2024-07-26 05:14:57.202697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202764] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.202999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.203012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.203024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.203037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.203048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.203061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 
05:14:57.203071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.203086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.203097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.203110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:38.172 [2024-07-26 05:14:57.203123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:17:38.173 [2024-07-26 05:14:57.203396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:38.173 [2024-07-26 05:14:57.203970] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:38.173 [2024-07-26 05:14:57.203982] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: de76b76b-7c39-4503-8dbc-d7eee7b4ae74 00:17:38.173 [2024-07-26 05:14:57.203993] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:38.173 [2024-07-26 05:14:57.204005] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:38.173 [2024-07-26 05:14:57.204014] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:38.173 [2024-07-26 05:14:57.204028] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:38.173 [2024-07-26 05:14:57.204037] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:38.173 [2024-07-26 05:14:57.204054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:38.173 [2024-07-26 05:14:57.204064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:38.173 [2024-07-26 05:14:57.204075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:38.173 [2024-07-26 05:14:57.204084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:38.173 [2024-07-26 05:14:57.204098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.173 [2024-07-26 05:14:57.204108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:38.173 [2024-07-26 05:14:57.204122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:17:38.173 [2024-07-26 05:14:57.204134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.173 [2024-07-26 05:14:57.224113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.173 [2024-07-26 05:14:57.224145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:38.173 [2024-07-26 05:14:57.224160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.913 ms 00:17:38.173 [2024-07-26 05:14:57.224171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.173 [2024-07-26 05:14:57.224484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.173 [2024-07-26 05:14:57.224501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:38.173 [2024-07-26 05:14:57.224517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:17:38.173 [2024-07-26 05:14:57.224527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.439 [2024-07-26 05:14:57.294375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.439 [2024-07-26 05:14:57.294420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:38.439 [2024-07-26 05:14:57.294437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.439 [2024-07-26 05:14:57.294447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.439 [2024-07-26 05:14:57.294525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.439 [2024-07-26 05:14:57.294536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:38.439 [2024-07-26 05:14:57.294553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.439 [2024-07-26 05:14:57.294564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.439 [2024-07-26 05:14:57.294670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.439 [2024-07-26 05:14:57.294684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:38.439 [2024-07-26 05:14:57.294697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.439 [2024-07-26 05:14:57.294707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.439 [2024-07-26 05:14:57.294743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.439 [2024-07-26 05:14:57.294754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
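Each management step in this trace is a group of four records: an Action (or Rollback) marker followed by name, duration, and status. The pairs can be re-joined straight from the log to see which steps dominate; a hypothetical filter, again assuming the output was saved to ftl.log:

# List the slowest trace_step entries by pairing each "name:" record
# with the "duration:" record that follows it (hypothetical helper).
awk '/trace_step/ && /name: /     { sub(/.*name: /, "");     n = $0 }
     /trace_step/ && /duration: / { sub(/.*duration: /, ""); print $1 " ms  " n }' ftl.log |
  sort -rn | head -5

Of the steps visible here, "Deinitialize L2P" (19.913 ms) dominates the shutdown; everything else stays under 2 ms, and the Rollback entries all report 0.000 ms.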
00:17:38.439 [2024-07-26 05:14:57.294767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.439 [2024-07-26 05:14:57.294780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.439 [2024-07-26 05:14:57.432897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.439 [2024-07-26 05:14:57.432955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:38.439 [2024-07-26 05:14:57.432973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.439 [2024-07-26 05:14:57.433000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.439 [2024-07-26 05:14:57.479786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.439 [2024-07-26 05:14:57.479836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:38.439 [2024-07-26 05:14:57.479857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.439 [2024-07-26 05:14:57.479867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.439 [2024-07-26 05:14:57.479966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.439 [2024-07-26 05:14:57.479977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:38.439 [2024-07-26 05:14:57.479991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.439 [2024-07-26 05:14:57.480001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.439 [2024-07-26 05:14:57.480067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.439 [2024-07-26 05:14:57.480079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:38.439 [2024-07-26 05:14:57.480091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.439 [2024-07-26 05:14:57.480100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.439 [2024-07-26 05:14:57.480263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.439 [2024-07-26 05:14:57.480277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:38.439 [2024-07-26 05:14:57.480290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.439 [2024-07-26 05:14:57.480300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.439 [2024-07-26 05:14:57.480364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.439 [2024-07-26 05:14:57.480377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:38.439 [2024-07-26 05:14:57.480390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.439 [2024-07-26 05:14:57.480400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.439 [2024-07-26 05:14:57.480454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.439 [2024-07-26 05:14:57.480465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:38.439 [2024-07-26 05:14:57.480478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.439 [2024-07-26 05:14:57.480488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.439 [2024-07-26 05:14:57.480545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.439 [2024-07-26 05:14:57.480557] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:38.439 [2024-07-26 05:14:57.480569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.439 [2024-07-26 05:14:57.480579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.439 [2024-07-26 05:14:57.480756] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 506.291 ms, result 0 00:17:38.439 true 00:17:38.439 05:14:57 -- ftl/fio.sh@75 -- # killprocess 71970 00:17:38.439 05:14:57 -- common/autotest_common.sh@926 -- # '[' -z 71970 ']' 00:17:38.439 05:14:57 -- common/autotest_common.sh@930 -- # kill -0 71970 00:17:38.439 05:14:57 -- common/autotest_common.sh@931 -- # uname 00:17:38.439 05:14:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.439 05:14:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71970 00:17:38.439 killing process with pid 71970 00:17:38.439 05:14:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:38.439 05:14:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:38.439 05:14:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71970' 00:17:38.439 05:14:57 -- common/autotest_common.sh@945 -- # kill 71970 00:17:38.439 05:14:57 -- common/autotest_common.sh@950 -- # wait 71970 00:17:43.707 05:15:02 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:43.707 05:15:02 -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:43.707 05:15:02 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:43.707 05:15:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:43.707 05:15:02 -- common/autotest_common.sh@10 -- # set +x 00:17:43.707 05:15:02 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:43.707 05:15:02 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:43.707 05:15:02 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:17:43.707 05:15:02 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:43.707 05:15:02 -- common/autotest_common.sh@1318 -- # local sanitizers 00:17:43.707 05:15:02 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:43.707 05:15:02 -- common/autotest_common.sh@1320 -- # shift 00:17:43.707 05:15:02 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:17:43.707 05:15:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:17:43.707 05:15:02 -- common/autotest_common.sh@1324 -- # grep libasan 00:17:43.707 05:15:02 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:43.707 05:15:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:17:43.707 05:15:02 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:43.707 05:15:02 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:43.707 05:15:02 -- common/autotest_common.sh@1326 -- # break 00:17:43.707 05:15:02 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:43.707 05:15:02 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:43.707 test: (g=0): rw=randwrite, bs=(R) 
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:43.707 fio-3.35 00:17:43.707 Starting 1 thread 00:17:50.270 00:17:50.270 test: (groupid=0, jobs=1): err= 0: pid=72201: Fri Jul 26 05:15:08 2024 00:17:50.270 read: IOPS=951, BW=63.2MiB/s (66.3MB/s)(255MiB/4027msec) 00:17:50.270 slat (nsec): min=4330, max=32255, avg=6108.56, stdev=2191.01 00:17:50.270 clat (usec): min=265, max=967, avg=434.26, stdev=59.84 00:17:50.270 lat (usec): min=281, max=972, avg=440.37, stdev=60.25 00:17:50.270 clat percentiles (usec): 00:17:50.270 | 1.00th=[ 310], 5.00th=[ 326], 10.00th=[ 371], 20.00th=[ 383], 00:17:50.270 | 30.00th=[ 392], 40.00th=[ 420], 50.00th=[ 445], 60.00th=[ 449], 00:17:50.270 | 70.00th=[ 457], 80.00th=[ 482], 90.00th=[ 515], 95.00th=[ 529], 00:17:50.270 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 693], 99.95th=[ 832], 00:17:50.270 | 99.99th=[ 971] 00:17:50.270 write: IOPS=958, BW=63.6MiB/s (66.7MB/s)(256MiB/4023msec); 0 zone resets 00:17:50.270 slat (nsec): min=16046, max=63997, avg=20596.42, stdev=4542.63 00:17:50.270 clat (usec): min=350, max=2637, avg=572.64, stdev=93.11 00:17:50.270 lat (usec): min=374, max=2654, avg=593.23, stdev=93.50 00:17:50.270 clat percentiles (usec): 00:17:50.270 | 1.00th=[ 400], 5.00th=[ 461], 10.00th=[ 469], 20.00th=[ 510], 00:17:50.270 | 30.00th=[ 529], 40.00th=[ 545], 50.00th=[ 562], 60.00th=[ 594], 00:17:50.270 | 70.00th=[ 603], 80.00th=[ 627], 90.00th=[ 668], 95.00th=[ 709], 00:17:50.270 | 99.00th=[ 898], 99.50th=[ 971], 99.90th=[ 1037], 99.95th=[ 1467], 00:17:50.270 | 99.99th=[ 2638] 00:17:50.270 bw ( KiB/s): min=60520, max=67728, per=100.00%, avg=65229.00, stdev=2465.14, samples=8 00:17:50.270 iops : min= 890, max= 996, avg=959.25, stdev=36.25, samples=8 00:17:50.270 lat (usec) : 500=51.01%, 750=47.72%, 1000=1.13% 00:17:50.270 lat (msec) : 2=0.13%, 4=0.01% 00:17:50.270 cpu : usr=99.28%, sys=0.07%, ctx=7, majf=0, minf=1318 00:17:50.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:50.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.270 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:50.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:50.270 00:17:50.270 Run status group 0 (all jobs): 00:17:50.271 READ: bw=63.2MiB/s (66.3MB/s), 63.2MiB/s-63.2MiB/s (66.3MB/s-66.3MB/s), io=255MiB (267MB), run=4027-4027msec 00:17:50.271 WRITE: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=256MiB (269MB), run=4023-4023msec 00:17:50.837 ----------------------------------------------------- 00:17:50.837 Suppressions used: 00:17:50.837 count bytes template 00:17:50.837 1 5 /usr/src/fio/parse.c 00:17:50.837 1 8 libtcmalloc_minimal.so 00:17:50.837 1 904 libcrypto.so 00:17:50.837 ----------------------------------------------------- 00:17:50.837 00:17:50.837 05:15:09 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:50.837 05:15:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:50.837 05:15:09 -- common/autotest_common.sh@10 -- # set +x 00:17:50.837 05:15:09 -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:50.837 05:15:09 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:50.837 05:15:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:50.837 05:15:09 -- common/autotest_common.sh@10 -- # set +x 00:17:50.837 05:15:09 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:50.837 05:15:09 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:50.837 05:15:09 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:17:50.837 05:15:09 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:50.837 05:15:09 -- common/autotest_common.sh@1318 -- # local sanitizers 00:17:50.837 05:15:09 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:50.837 05:15:09 -- common/autotest_common.sh@1320 -- # shift 00:17:50.837 05:15:09 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:17:50.837 05:15:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:17:50.837 05:15:09 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:50.837 05:15:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:17:50.837 05:15:09 -- common/autotest_common.sh@1324 -- # grep libasan 00:17:50.837 05:15:09 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:50.837 05:15:09 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:50.837 05:15:09 -- common/autotest_common.sh@1326 -- # break 00:17:50.837 05:15:09 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:50.837 05:15:09 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:51.096 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:51.096 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:51.096 fio-3.35 00:17:51.096 Starting 2 threads 00:18:17.695 00:18:17.695 first_half: (groupid=0, jobs=1): err= 0: pid=72305: Fri Jul 26 05:15:36 2024 00:18:17.695 read: IOPS=2679, BW=10.5MiB/s (11.0MB/s)(255MiB/24377msec) 00:18:17.695 slat (nsec): min=3431, max=44853, avg=6060.68, stdev=2113.02 00:18:17.695 clat (usec): min=829, max=364808, avg=37356.48, stdev=20861.84 00:18:17.695 lat (usec): min=838, max=364814, avg=37362.54, stdev=20862.08 00:18:17.695 clat percentiles (msec): 00:18:17.695 | 1.00th=[ 18], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:18:17.695 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:18:17.695 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 41], 95.00th=[ 55], 00:18:17.695 | 99.00th=[ 157], 99.50th=[ 176], 99.90th=[ 236], 99.95th=[ 309], 00:18:17.695 | 99.99th=[ 355] 00:18:17.695 write: IOPS=3197, BW=12.5MiB/s (13.1MB/s)(256MiB/20497msec); 0 zone resets 00:18:17.695 slat (usec): min=4, max=1199, avg= 8.53, stdev= 7.76 00:18:17.695 clat (usec): min=409, max=94522, avg=10357.34, stdev=17407.47 00:18:17.695 lat (usec): min=420, max=94529, avg=10365.86, stdev=17407.75 00:18:17.695 clat percentiles (usec): 00:18:17.695 | 1.00th=[ 947], 5.00th=[ 1205], 10.00th=[ 1418], 20.00th=[ 2311], 00:18:17.695 | 30.00th=[ 3752], 40.00th=[ 4948], 50.00th=[ 5538], 60.00th=[ 6521], 00:18:17.695 | 70.00th=[ 7439], 80.00th=[11863], 90.00th=[14615], 95.00th=[69731], 00:18:17.695 | 99.00th=[85459], 99.50th=[87557], 99.90th=[90702], 99.95th=[91751], 00:18:17.695 | 99.99th=[93848] 00:18:17.695 bw ( KiB/s): min= 1848, max=39952, 
per=93.34%, avg=21845.33, stdev=13667.98, samples=24 00:18:17.695 iops : min= 462, max= 9988, avg=5461.33, stdev=3416.99, samples=24 00:18:17.695 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.66% 00:18:17.695 lat (msec) : 2=8.57%, 4=7.21%, 10=22.08%, 20=8.69%, 50=47.00% 00:18:17.695 lat (msec) : 100=4.51%, 250=1.17%, 500=0.05% 00:18:17.695 cpu : usr=99.07%, sys=0.36%, ctx=64, majf=0, minf=5578 00:18:17.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:17.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.695 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:17.695 issued rwts: total=65309,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:17.695 second_half: (groupid=0, jobs=1): err= 0: pid=72306: Fri Jul 26 05:15:36 2024 00:18:17.695 read: IOPS=2658, BW=10.4MiB/s (10.9MB/s)(255MiB/24562msec) 00:18:17.695 slat (nsec): min=3464, max=43060, avg=5967.95, stdev=2072.61 00:18:17.695 clat (usec): min=857, max=374941, avg=36822.70, stdev=23351.04 00:18:17.695 lat (usec): min=865, max=374947, avg=36828.67, stdev=23351.33 00:18:17.695 clat percentiles (msec): 00:18:17.695 | 1.00th=[ 9], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 32], 00:18:17.695 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 33], 00:18:17.695 | 70.00th=[ 35], 80.00th=[ 38], 90.00th=[ 39], 95.00th=[ 56], 00:18:17.696 | 99.00th=[ 163], 99.50th=[ 178], 99.90th=[ 255], 99.95th=[ 313], 00:18:17.696 | 99.99th=[ 368] 00:18:17.696 write: IOPS=2925, BW=11.4MiB/s (12.0MB/s)(256MiB/22402msec); 0 zone resets 00:18:17.696 slat (usec): min=4, max=150, avg= 8.43, stdev= 4.25 00:18:17.696 clat (usec): min=440, max=95612, avg=11253.98, stdev=18622.37 00:18:17.696 lat (usec): min=450, max=95620, avg=11262.41, stdev=18622.65 00:18:17.696 clat percentiles (usec): 00:18:17.696 | 1.00th=[ 898], 5.00th=[ 1139], 10.00th=[ 1287], 20.00th=[ 1549], 00:18:17.696 | 30.00th=[ 1975], 40.00th=[ 3523], 50.00th=[ 4883], 60.00th=[ 6194], 00:18:17.696 | 70.00th=[ 8225], 80.00th=[13173], 90.00th=[31851], 95.00th=[70779], 00:18:17.696 | 99.00th=[86508], 99.50th=[88605], 99.90th=[91751], 99.95th=[92799], 00:18:17.696 | 99.99th=[94897] 00:18:17.696 bw ( KiB/s): min= 1016, max=56520, per=89.61%, avg=20973.44, stdev=14463.29, samples=25 00:18:17.696 iops : min= 254, max=14130, avg=5243.32, stdev=3615.83, samples=25 00:18:17.696 lat (usec) : 500=0.01%, 750=0.13%, 1000=0.92% 00:18:17.696 lat (msec) : 2=14.23%, 4=7.27%, 10=15.54%, 20=8.21%, 50=48.40% 00:18:17.696 lat (msec) : 100=3.71%, 250=1.53%, 500=0.05% 00:18:17.696 cpu : usr=99.19%, sys=0.31%, ctx=62, majf=0, minf=5538 00:18:17.696 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:17.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.696 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:17.696 issued rwts: total=65309,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.696 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:17.696 00:18:17.696 Run status group 0 (all jobs): 00:18:17.696 READ: bw=20.8MiB/s (21.8MB/s), 10.4MiB/s-10.5MiB/s (10.9MB/s-11.0MB/s), io=510MiB (535MB), run=24377-24562msec 00:18:17.696 WRITE: bw=22.9MiB/s (24.0MB/s), 11.4MiB/s-12.5MiB/s (12.0MB/s-13.1MB/s), io=512MiB (537MB), run=20497-22402msec 00:18:19.599 ----------------------------------------------------- 00:18:19.599 Suppressions used: 00:18:19.599 count bytes template 
00:18:19.599 2 10 /usr/src/fio/parse.c 00:18:19.599 4 384 /usr/src/fio/iolog.c 00:18:19.599 1 8 libtcmalloc_minimal.so 00:18:19.599 1 904 libcrypto.so 00:18:19.599 ----------------------------------------------------- 00:18:19.599 00:18:19.599 05:15:38 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:19.599 05:15:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:19.599 05:15:38 -- common/autotest_common.sh@10 -- # set +x 00:18:19.599 05:15:38 -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:19.599 05:15:38 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:19.599 05:15:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:19.599 05:15:38 -- common/autotest_common.sh@10 -- # set +x 00:18:19.599 05:15:38 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:19.600 05:15:38 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:19.600 05:15:38 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:19.600 05:15:38 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:19.600 05:15:38 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:19.600 05:15:38 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:19.600 05:15:38 -- common/autotest_common.sh@1320 -- # shift 00:18:19.600 05:15:38 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:19.600 05:15:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:19.600 05:15:38 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:19.600 05:15:38 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:19.600 05:15:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:19.600 05:15:38 -- common/autotest_common.sh@1324 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:19.600 05:15:38 -- common/autotest_common.sh@1325 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:19.600 05:15:38 -- common/autotest_common.sh@1326 -- # break 00:18:19.600 05:15:38 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:19.600 05:15:38 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:19.858 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:19.858 fio-3.35 00:18:19.858 Starting 1 thread 00:18:34.736 00:18:34.736 test: (groupid=0, jobs=1): err= 0: pid=72635: Fri Jul 26 05:15:53 2024 00:18:34.736 read: IOPS=8166, BW=31.9MiB/s (33.5MB/s)(255MiB/7984msec) 00:18:34.736 slat (nsec): min=3360, max=54171, avg=5274.02, stdev=1976.20 00:18:34.736 clat (usec): min=653, max=29897, avg=15664.19, stdev=989.62 00:18:34.736 lat (usec): min=663, max=29903, avg=15669.47, stdev=989.62 00:18:34.736 clat percentiles (usec): 00:18:34.736 | 1.00th=[14746], 5.00th=[14877], 10.00th=[15008], 20.00th=[15139], 00:18:34.736 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15533], 60.00th=[15664], 00:18:34.736 | 70.00th=[15795], 80.00th=[15926], 90.00th=[16188], 95.00th=[16581], 00:18:34.736 | 99.00th=[19792], 99.50th=[20055], 99.90th=[24773], 99.95th=[26346], 00:18:34.736 | 99.99th=[29492] 00:18:34.736 write: IOPS=13.9k, BW=54.2MiB/s (56.9MB/s)(256MiB/4720msec); 0 zone 
resets 00:18:34.736 slat (usec): min=4, max=567, avg= 8.19, stdev= 5.71 00:18:34.736 clat (usec): min=547, max=52131, avg=9170.71, stdev=11237.17 00:18:34.736 lat (usec): min=556, max=52139, avg=9178.91, stdev=11237.17 00:18:34.736 clat percentiles (usec): 00:18:34.736 | 1.00th=[ 807], 5.00th=[ 988], 10.00th=[ 1090], 20.00th=[ 1237], 00:18:34.736 | 30.00th=[ 1401], 40.00th=[ 1811], 50.00th=[ 6456], 60.00th=[ 7308], 00:18:34.736 | 70.00th=[ 8225], 80.00th=[ 9765], 90.00th=[32375], 95.00th=[34341], 00:18:34.736 | 99.00th=[41157], 99.50th=[42730], 99.90th=[44827], 99.95th=[46400], 00:18:34.736 | 99.99th=[50594] 00:18:34.736 bw ( KiB/s): min=20360, max=73560, per=94.38%, avg=52417.30, stdev=13902.79, samples=10 00:18:34.736 iops : min= 5090, max=18390, avg=13104.30, stdev=3475.69, samples=10 00:18:34.736 lat (usec) : 750=0.24%, 1000=2.52% 00:18:34.736 lat (msec) : 2=17.73%, 4=0.60%, 10=19.71%, 20=50.91%, 50=8.30% 00:18:34.736 lat (msec) : 100=0.01% 00:18:34.736 cpu : usr=98.66%, sys=0.69%, ctx=23, majf=0, minf=5567 00:18:34.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:34.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.737 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:34.737 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:34.737 00:18:34.737 Run status group 0 (all jobs): 00:18:34.737 READ: bw=31.9MiB/s (33.5MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=255MiB (267MB), run=7984-7984msec 00:18:34.737 WRITE: bw=54.2MiB/s (56.9MB/s), 54.2MiB/s-54.2MiB/s (56.9MB/s-56.9MB/s), io=256MiB (268MB), run=4720-4720msec 00:18:36.120 ----------------------------------------------------- 00:18:36.120 Suppressions used: 00:18:36.120 count bytes template 00:18:36.120 1 5 /usr/src/fio/parse.c 00:18:36.120 2 192 /usr/src/fio/iolog.c 00:18:36.120 1 8 libtcmalloc_minimal.so 00:18:36.120 1 904 libcrypto.so 00:18:36.120 ----------------------------------------------------- 00:18:36.120 00:18:36.120 05:15:54 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:36.120 05:15:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:36.120 05:15:54 -- common/autotest_common.sh@10 -- # set +x 00:18:36.120 05:15:54 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:36.120 Remove shared memory files 00:18:36.120 05:15:54 -- ftl/fio.sh@85 -- # remove_shm 00:18:36.120 05:15:54 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:36.120 05:15:54 -- ftl/common.sh@205 -- # rm -f rm -f 00:18:36.120 05:15:54 -- ftl/common.sh@206 -- # rm -f rm -f 00:18:36.120 05:15:54 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56488 /dev/shm/spdk_tgt_trace.pid70871 00:18:36.120 05:15:55 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:36.120 05:15:55 -- ftl/common.sh@209 -- # rm -f rm -f 00:18:36.120 ************************************ 00:18:36.120 END TEST ftl_fio_basic 00:18:36.120 ************************************ 00:18:36.120 00:18:36.120 real 1m7.055s 00:18:36.120 user 2m24.581s 00:18:36.120 sys 0m3.799s 00:18:36.120 05:15:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:36.120 05:15:55 -- common/autotest_common.sh@10 -- # set +x 00:18:36.120 05:15:55 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:18:36.120 05:15:55 -- common/autotest_common.sh@1077 -- # 
'[' 4 -le 1 ']' 00:18:36.120 05:15:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:36.120 05:15:55 -- common/autotest_common.sh@10 -- # set +x 00:18:36.120 ************************************ 00:18:36.120 START TEST ftl_bdevperf 00:18:36.120 ************************************ 00:18:36.120 05:15:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:18:36.120 * Looking for test storage... 00:18:36.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.120 05:15:55 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:36.120 05:15:55 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:36.120 05:15:55 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.120 05:15:55 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.120 05:15:55 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:36.120 05:15:55 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:36.120 05:15:55 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.120 05:15:55 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:36.120 05:15:55 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:36.120 05:15:55 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.120 05:15:55 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.120 05:15:55 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:36.120 05:15:55 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:36.120 05:15:55 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.120 05:15:55 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.120 05:15:55 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:36.120 05:15:55 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:36.120 05:15:55 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.120 05:15:55 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.120 05:15:55 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:36.120 05:15:55 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:36.120 05:15:55 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.121 05:15:55 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.121 05:15:55 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.121 05:15:55 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.121 05:15:55 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:36.121 05:15:55 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:36.121 05:15:55 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.121 05:15:55 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.121 05:15:55 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:18:36.121 05:15:55 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:18:36.121 05:15:55 -- ftl/bdevperf.sh@13 -- # use_append= 00:18:36.121 05:15:55 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
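The bdevperf application is started with -z (wait for RPC-driven configuration before running any job) and -T ftl0 (naming ftl0 as the target bdev), after which the harness blocks in waitforlisten until the application's RPC socket answers. A minimal sketch of that launch pattern, assuming the default /var/tmp/spdk.sock socket and the repository paths shown in this log:

# Start bdevperf in RPC-configuration mode and poll until its RPC
# socket responds (sketch of the waitforlisten behavior above).
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" -z -T ftl0 &
bdevperf_pid=$!

until "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$bdevperf_pid" 2>/dev/null || { echo "bdevperf exited early" >&2; exit 1; }
  sleep 0.2
done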
00:18:36.121 05:15:55 -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:36.121 05:15:55 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:36.121 05:15:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:36.121 05:15:55 -- common/autotest_common.sh@10 -- # set +x 00:18:36.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.121 05:15:55 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=72851 00:18:36.121 05:15:55 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:36.121 05:15:55 -- ftl/bdevperf.sh@22 -- # waitforlisten 72851 00:18:36.121 05:15:55 -- common/autotest_common.sh@819 -- # '[' -z 72851 ']' 00:18:36.121 05:15:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.121 05:15:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:36.121 05:15:55 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:36.121 05:15:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.121 05:15:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:36.121 05:15:55 -- common/autotest_common.sh@10 -- # set +x 00:18:36.396 [2024-07-26 05:15:55.283521] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:36.396 [2024-07-26 05:15:55.283644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72851 ] 00:18:36.396 [2024-07-26 05:15:55.452317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.961 [2024-07-26 05:15:55.811433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.895 05:15:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:37.895 05:15:56 -- common/autotest_common.sh@852 -- # return 0 00:18:37.895 05:15:56 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:18:37.895 05:15:56 -- ftl/common.sh@54 -- # local name=nvme0 00:18:37.895 05:15:56 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:18:37.895 05:15:56 -- ftl/common.sh@56 -- # local size=103424 00:18:37.895 05:15:56 -- ftl/common.sh@59 -- # local base_bdev 00:18:37.895 05:15:56 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:18:38.152 05:15:57 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:38.152 05:15:57 -- ftl/common.sh@62 -- # local base_size 00:18:38.152 05:15:57 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:38.152 05:15:57 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:18:38.152 05:15:57 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:38.152 05:15:57 -- common/autotest_common.sh@1359 -- # local bs 00:18:38.152 05:15:57 -- common/autotest_common.sh@1360 -- # local nb 00:18:38.410 05:15:57 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:38.410 05:15:57 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:38.410 { 00:18:38.410 "name": "nvme0n1", 00:18:38.410 "aliases": [ 00:18:38.410 "9065c9cb-2011-4f42-848e-c321d5337e0d" 00:18:38.410 ], 00:18:38.410 "product_name": "NVMe disk", 00:18:38.410 "block_size": 4096, 00:18:38.410 
"num_blocks": 1310720, 00:18:38.410 "uuid": "9065c9cb-2011-4f42-848e-c321d5337e0d", 00:18:38.410 "assigned_rate_limits": { 00:18:38.410 "rw_ios_per_sec": 0, 00:18:38.410 "rw_mbytes_per_sec": 0, 00:18:38.410 "r_mbytes_per_sec": 0, 00:18:38.410 "w_mbytes_per_sec": 0 00:18:38.410 }, 00:18:38.410 "claimed": true, 00:18:38.410 "claim_type": "read_many_write_one", 00:18:38.410 "zoned": false, 00:18:38.410 "supported_io_types": { 00:18:38.410 "read": true, 00:18:38.410 "write": true, 00:18:38.410 "unmap": true, 00:18:38.410 "write_zeroes": true, 00:18:38.410 "flush": true, 00:18:38.410 "reset": true, 00:18:38.410 "compare": true, 00:18:38.410 "compare_and_write": false, 00:18:38.410 "abort": true, 00:18:38.410 "nvme_admin": true, 00:18:38.410 "nvme_io": true 00:18:38.410 }, 00:18:38.410 "driver_specific": { 00:18:38.410 "nvme": [ 00:18:38.410 { 00:18:38.410 "pci_address": "0000:00:07.0", 00:18:38.410 "trid": { 00:18:38.410 "trtype": "PCIe", 00:18:38.410 "traddr": "0000:00:07.0" 00:18:38.410 }, 00:18:38.410 "ctrlr_data": { 00:18:38.410 "cntlid": 0, 00:18:38.410 "vendor_id": "0x1b36", 00:18:38.410 "model_number": "QEMU NVMe Ctrl", 00:18:38.410 "serial_number": "12341", 00:18:38.411 "firmware_revision": "8.0.0", 00:18:38.411 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:38.411 "oacs": { 00:18:38.411 "security": 0, 00:18:38.411 "format": 1, 00:18:38.411 "firmware": 0, 00:18:38.411 "ns_manage": 1 00:18:38.411 }, 00:18:38.411 "multi_ctrlr": false, 00:18:38.411 "ana_reporting": false 00:18:38.411 }, 00:18:38.411 "vs": { 00:18:38.411 "nvme_version": "1.4" 00:18:38.411 }, 00:18:38.411 "ns_data": { 00:18:38.411 "id": 1, 00:18:38.411 "can_share": false 00:18:38.411 } 00:18:38.411 } 00:18:38.411 ], 00:18:38.411 "mp_policy": "active_passive" 00:18:38.411 } 00:18:38.411 } 00:18:38.411 ]' 00:18:38.411 05:15:57 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:38.668 05:15:57 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:38.668 05:15:57 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:38.669 05:15:57 -- common/autotest_common.sh@1363 -- # nb=1310720 00:18:38.669 05:15:57 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:18:38.669 05:15:57 -- common/autotest_common.sh@1367 -- # echo 5120 00:18:38.669 05:15:57 -- ftl/common.sh@63 -- # base_size=5120 00:18:38.669 05:15:57 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:38.669 05:15:57 -- ftl/common.sh@67 -- # clear_lvols 00:18:38.669 05:15:57 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:38.669 05:15:57 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:38.926 05:15:57 -- ftl/common.sh@28 -- # stores=586468da-d51d-4fb7-91f6-0cd94747fd2d 00:18:38.926 05:15:57 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:38.926 05:15:57 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 586468da-d51d-4fb7-91f6-0cd94747fd2d 00:18:38.926 05:15:57 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:39.183 05:15:58 -- ftl/common.sh@68 -- # lvs=e672e71a-1ec5-4f38-8d29-e69d3b3fab14 00:18:39.183 05:15:58 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e672e71a-1ec5-4f38-8d29-e69d3b3fab14 00:18:39.442 05:15:58 -- ftl/bdevperf.sh@23 -- # split_bdev=fd280296-fb0a-4276-be0a-96436194b450 00:18:39.442 05:15:58 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 fd280296-fb0a-4276-be0a-96436194b450 00:18:39.442 
05:15:58 -- ftl/common.sh@35 -- # local name=nvc0 00:18:39.442 05:15:58 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:18:39.442 05:15:58 -- ftl/common.sh@37 -- # local base_bdev=fd280296-fb0a-4276-be0a-96436194b450 00:18:39.442 05:15:58 -- ftl/common.sh@38 -- # local cache_size= 00:18:39.442 05:15:58 -- ftl/common.sh@41 -- # get_bdev_size fd280296-fb0a-4276-be0a-96436194b450 00:18:39.442 05:15:58 -- common/autotest_common.sh@1357 -- # local bdev_name=fd280296-fb0a-4276-be0a-96436194b450 00:18:39.442 05:15:58 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:39.442 05:15:58 -- common/autotest_common.sh@1359 -- # local bs 00:18:39.442 05:15:58 -- common/autotest_common.sh@1360 -- # local nb 00:18:39.442 05:15:58 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd280296-fb0a-4276-be0a-96436194b450 00:18:39.700 05:15:58 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:39.700 { 00:18:39.700 "name": "fd280296-fb0a-4276-be0a-96436194b450", 00:18:39.700 "aliases": [ 00:18:39.700 "lvs/nvme0n1p0" 00:18:39.700 ], 00:18:39.700 "product_name": "Logical Volume", 00:18:39.701 "block_size": 4096, 00:18:39.701 "num_blocks": 26476544, 00:18:39.701 "uuid": "fd280296-fb0a-4276-be0a-96436194b450", 00:18:39.701 "assigned_rate_limits": { 00:18:39.701 "rw_ios_per_sec": 0, 00:18:39.701 "rw_mbytes_per_sec": 0, 00:18:39.701 "r_mbytes_per_sec": 0, 00:18:39.701 "w_mbytes_per_sec": 0 00:18:39.701 }, 00:18:39.701 "claimed": false, 00:18:39.701 "zoned": false, 00:18:39.701 "supported_io_types": { 00:18:39.701 "read": true, 00:18:39.701 "write": true, 00:18:39.701 "unmap": true, 00:18:39.701 "write_zeroes": true, 00:18:39.701 "flush": false, 00:18:39.701 "reset": true, 00:18:39.701 "compare": false, 00:18:39.701 "compare_and_write": false, 00:18:39.701 "abort": false, 00:18:39.701 "nvme_admin": false, 00:18:39.701 "nvme_io": false 00:18:39.701 }, 00:18:39.701 "driver_specific": { 00:18:39.701 "lvol": { 00:18:39.701 "lvol_store_uuid": "e672e71a-1ec5-4f38-8d29-e69d3b3fab14", 00:18:39.701 "base_bdev": "nvme0n1", 00:18:39.701 "thin_provision": true, 00:18:39.701 "snapshot": false, 00:18:39.701 "clone": false, 00:18:39.701 "esnap_clone": false 00:18:39.701 } 00:18:39.701 } 00:18:39.701 } 00:18:39.701 ]' 00:18:39.701 05:15:58 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:39.701 05:15:58 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:39.701 05:15:58 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:39.701 05:15:58 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:39.701 05:15:58 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:39.701 05:15:58 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:39.701 05:15:58 -- ftl/common.sh@41 -- # local base_size=5171 00:18:39.701 05:15:58 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:39.701 05:15:58 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:18:39.960 05:15:58 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:39.960 05:15:58 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:39.960 05:15:58 -- ftl/common.sh@48 -- # get_bdev_size fd280296-fb0a-4276-be0a-96436194b450 00:18:39.960 05:15:58 -- common/autotest_common.sh@1357 -- # local bdev_name=fd280296-fb0a-4276-be0a-96436194b450 00:18:39.960 05:15:58 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:39.960 05:15:58 -- common/autotest_common.sh@1359 -- # local bs 00:18:39.960 05:15:58 -- 
common/autotest_common.sh@1360 -- # local nb 00:18:39.960 05:15:58 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd280296-fb0a-4276-be0a-96436194b450 00:18:39.960 05:15:59 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:39.960 { 00:18:39.960 "name": "fd280296-fb0a-4276-be0a-96436194b450", 00:18:39.960 "aliases": [ 00:18:39.960 "lvs/nvme0n1p0" 00:18:39.960 ], 00:18:39.960 "product_name": "Logical Volume", 00:18:39.960 "block_size": 4096, 00:18:39.960 "num_blocks": 26476544, 00:18:39.960 "uuid": "fd280296-fb0a-4276-be0a-96436194b450", 00:18:39.960 "assigned_rate_limits": { 00:18:39.960 "rw_ios_per_sec": 0, 00:18:39.960 "rw_mbytes_per_sec": 0, 00:18:39.960 "r_mbytes_per_sec": 0, 00:18:39.960 "w_mbytes_per_sec": 0 00:18:39.960 }, 00:18:39.960 "claimed": false, 00:18:39.960 "zoned": false, 00:18:39.960 "supported_io_types": { 00:18:39.960 "read": true, 00:18:39.960 "write": true, 00:18:39.960 "unmap": true, 00:18:39.960 "write_zeroes": true, 00:18:39.960 "flush": false, 00:18:39.960 "reset": true, 00:18:39.960 "compare": false, 00:18:39.960 "compare_and_write": false, 00:18:39.960 "abort": false, 00:18:39.960 "nvme_admin": false, 00:18:39.960 "nvme_io": false 00:18:39.960 }, 00:18:39.960 "driver_specific": { 00:18:39.960 "lvol": { 00:18:39.960 "lvol_store_uuid": "e672e71a-1ec5-4f38-8d29-e69d3b3fab14", 00:18:39.960 "base_bdev": "nvme0n1", 00:18:39.960 "thin_provision": true, 00:18:39.960 "snapshot": false, 00:18:39.960 "clone": false, 00:18:39.960 "esnap_clone": false 00:18:39.960 } 00:18:39.960 } 00:18:39.960 } 00:18:39.960 ]' 00:18:39.960 05:15:59 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:40.219 05:15:59 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:40.219 05:15:59 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:40.219 05:15:59 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:40.219 05:15:59 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:40.219 05:15:59 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:40.219 05:15:59 -- ftl/common.sh@48 -- # cache_size=5171 00:18:40.219 05:15:59 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:40.219 05:15:59 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:18:40.219 05:15:59 -- ftl/bdevperf.sh@26 -- # get_bdev_size fd280296-fb0a-4276-be0a-96436194b450 00:18:40.219 05:15:59 -- common/autotest_common.sh@1357 -- # local bdev_name=fd280296-fb0a-4276-be0a-96436194b450 00:18:40.219 05:15:59 -- common/autotest_common.sh@1358 -- # local bdev_info 00:18:40.219 05:15:59 -- common/autotest_common.sh@1359 -- # local bs 00:18:40.219 05:15:59 -- common/autotest_common.sh@1360 -- # local nb 00:18:40.219 05:15:59 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd280296-fb0a-4276-be0a-96436194b450 00:18:40.479 05:15:59 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:18:40.479 { 00:18:40.479 "name": "fd280296-fb0a-4276-be0a-96436194b450", 00:18:40.479 "aliases": [ 00:18:40.479 "lvs/nvme0n1p0" 00:18:40.479 ], 00:18:40.479 "product_name": "Logical Volume", 00:18:40.479 "block_size": 4096, 00:18:40.479 "num_blocks": 26476544, 00:18:40.479 "uuid": "fd280296-fb0a-4276-be0a-96436194b450", 00:18:40.479 "assigned_rate_limits": { 00:18:40.479 "rw_ios_per_sec": 0, 00:18:40.479 "rw_mbytes_per_sec": 0, 00:18:40.479 "r_mbytes_per_sec": 0, 00:18:40.479 "w_mbytes_per_sec": 0 00:18:40.479 }, 00:18:40.479 "claimed": false, 
00:18:40.479 "zoned": false, 00:18:40.479 "supported_io_types": { 00:18:40.479 "read": true, 00:18:40.479 "write": true, 00:18:40.479 "unmap": true, 00:18:40.479 "write_zeroes": true, 00:18:40.479 "flush": false, 00:18:40.479 "reset": true, 00:18:40.479 "compare": false, 00:18:40.479 "compare_and_write": false, 00:18:40.479 "abort": false, 00:18:40.479 "nvme_admin": false, 00:18:40.479 "nvme_io": false 00:18:40.479 }, 00:18:40.479 "driver_specific": { 00:18:40.479 "lvol": { 00:18:40.479 "lvol_store_uuid": "e672e71a-1ec5-4f38-8d29-e69d3b3fab14", 00:18:40.479 "base_bdev": "nvme0n1", 00:18:40.479 "thin_provision": true, 00:18:40.479 "snapshot": false, 00:18:40.479 "clone": false, 00:18:40.479 "esnap_clone": false 00:18:40.479 } 00:18:40.479 } 00:18:40.479 } 00:18:40.479 ]' 00:18:40.479 05:15:59 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:18:40.479 05:15:59 -- common/autotest_common.sh@1362 -- # bs=4096 00:18:40.479 05:15:59 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:18:40.739 05:15:59 -- common/autotest_common.sh@1363 -- # nb=26476544 00:18:40.739 05:15:59 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:18:40.739 05:15:59 -- common/autotest_common.sh@1367 -- # echo 103424 00:18:40.739 05:15:59 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:18:40.739 05:15:59 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fd280296-fb0a-4276-be0a-96436194b450 -c nvc0n1p0 --l2p_dram_limit 20 00:18:40.739 [2024-07-26 05:15:59.751639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.739 [2024-07-26 05:15:59.751715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.739 [2024-07-26 05:15:59.751738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:40.739 [2024-07-26 05:15:59.751751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.739 [2024-07-26 05:15:59.751818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.739 [2024-07-26 05:15:59.751833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.739 [2024-07-26 05:15:59.751849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:40.739 [2024-07-26 05:15:59.751861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.739 [2024-07-26 05:15:59.751887] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.739 [2024-07-26 05:15:59.753117] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.739 [2024-07-26 05:15:59.753168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.740 [2024-07-26 05:15:59.753181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.740 [2024-07-26 05:15:59.753198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:18:40.740 [2024-07-26 05:15:59.753222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.740 [2024-07-26 05:15:59.753317] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 95ac5582-ff1b-4940-b154-9382f0483ceb 00:18:40.740 [2024-07-26 05:15:59.755808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.740 [2024-07-26 05:15:59.755851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 
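The bdev_ftl_create call above is the top of the stack this test assembles across the two PCIe controllers. Collected from the RPC calls traced earlier in this log (addresses and sizes verbatim; both UUIDs differ on every run), the full sequence is:

# FTL bdev stack as built by this test, via rpc.py.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0    # base NVMe
$RPC bdev_lvol_create_lvstore nvme0n1 lvs
lvol=$($RPC bdev_lvol_create nvme0n1p0 103424 -t \
        -u e672e71a-1ec5-4f38-8d29-e69d3b3fab14)                     # thin lvol, 103424 MiB

$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0     # NV cache NVMe
$RPC bdev_split_create nvc0n1 -s 5171 1                              # one 5171 MiB split

$RPC -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 20

The --l2p_dram_limit 20 matches the l2p_dram_size_mb=20 computed just above, and the resulting capacities (103424.00 MiB base, 5171.00 MiB NV cache) are echoed back in the ftl_layout_setup notices that follow.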
00:18:40.740 [2024-07-26 05:15:59.755865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:40.740 [2024-07-26 05:15:59.755881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.740 [2024-07-26 05:15:59.770909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.740 [2024-07-26 05:15:59.770947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.740 [2024-07-26 05:15:59.770962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.952 ms 00:18:40.740 [2024-07-26 05:15:59.770983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.740 [2024-07-26 05:15:59.771095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.740 [2024-07-26 05:15:59.771115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.740 [2024-07-26 05:15:59.771128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:40.740 [2024-07-26 05:15:59.771148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.740 [2024-07-26 05:15:59.771237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.740 [2024-07-26 05:15:59.771257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.740 [2024-07-26 05:15:59.771270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:40.740 [2024-07-26 05:15:59.771287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.740 [2024-07-26 05:15:59.771325] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.740 [2024-07-26 05:15:59.778429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.740 [2024-07-26 05:15:59.778470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.740 [2024-07-26 05:15:59.778499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.113 ms 00:18:40.740 [2024-07-26 05:15:59.778515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.740 [2024-07-26 05:15:59.778588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.740 [2024-07-26 05:15:59.778603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.740 [2024-07-26 05:15:59.778644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:40.740 [2024-07-26 05:15:59.778656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.740 [2024-07-26 05:15:59.778695] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:40.740 [2024-07-26 05:15:59.778828] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:40.740 [2024-07-26 05:15:59.778855] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.740 [2024-07-26 05:15:59.778871] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:40.740 [2024-07-26 05:15:59.778889] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.740 [2024-07-26 05:15:59.778904] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.740 [2024-07-26 05:15:59.779141] ftl_layout.c: 
680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:40.740 [2024-07-26 05:15:59.779155] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.740 [2024-07-26 05:15:59.779172] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:40.740 [2024-07-26 05:15:59.779184] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:40.740 [2024-07-26 05:15:59.779206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.740 [2024-07-26 05:15:59.779218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.740 [2024-07-26 05:15:59.779234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:18:40.740 [2024-07-26 05:15:59.779246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.740 [2024-07-26 05:15:59.779330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.740 [2024-07-26 05:15:59.779343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.740 [2024-07-26 05:15:59.779361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:40.740 [2024-07-26 05:15:59.779372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.740 [2024-07-26 05:15:59.779444] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.740 [2024-07-26 05:15:59.779462] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.740 [2024-07-26 05:15:59.779479] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.740 [2024-07-26 05:15:59.779491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.740 [2024-07-26 05:15:59.779507] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.740 [2024-07-26 05:15:59.779517] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.740 [2024-07-26 05:15:59.779532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:40.740 [2024-07-26 05:15:59.779560] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.740 [2024-07-26 05:15:59.779590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:40.740 [2024-07-26 05:15:59.779602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.740 [2024-07-26 05:15:59.779619] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.740 [2024-07-26 05:15:59.779630] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:40.740 [2024-07-26 05:15:59.779645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.740 [2024-07-26 05:15:59.779657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.740 [2024-07-26 05:15:59.779672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:40.740 [2024-07-26 05:15:59.779683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.740 [2024-07-26 05:15:59.779702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.740 [2024-07-26 05:15:59.779716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:40.740 [2024-07-26 05:15:59.779731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.740 [2024-07-26 05:15:59.779743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] 
Region data_nvc 00:18:40.740 [2024-07-26 05:15:59.779758] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:40.740 [2024-07-26 05:15:59.779769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:40.740 [2024-07-26 05:15:59.779785] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.740 [2024-07-26 05:15:59.779795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:40.740 [2024-07-26 05:15:59.779811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.740 [2024-07-26 05:15:59.779822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.740 [2024-07-26 05:15:59.779837] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:40.740 [2024-07-26 05:15:59.779848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.740 [2024-07-26 05:15:59.779863] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.740 [2024-07-26 05:15:59.779875] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:40.740 [2024-07-26 05:15:59.779889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.740 [2024-07-26 05:15:59.779900] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.740 [2024-07-26 05:15:59.779919] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:40.740 [2024-07-26 05:15:59.779930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.740 [2024-07-26 05:15:59.779945] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.740 [2024-07-26 05:15:59.779955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:40.740 [2024-07-26 05:15:59.779971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.740 [2024-07-26 05:15:59.779982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.740 [2024-07-26 05:15:59.779997] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:40.740 [2024-07-26 05:15:59.780008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.740 [2024-07-26 05:15:59.780022] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.740 [2024-07-26 05:15:59.780034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.740 [2024-07-26 05:15:59.780050] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.740 [2024-07-26 05:15:59.780061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.740 [2024-07-26 05:15:59.780077] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.740 [2024-07-26 05:15:59.780088] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.740 [2024-07-26 05:15:59.780103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.740 [2024-07-26 05:15:59.780114] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.740 [2024-07-26 05:15:59.780132] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.740 [2024-07-26 05:15:59.780146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.740 [2024-07-26 05:15:59.780162] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.741 [2024-07-26 
05:15:59.780177] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.741 [2024-07-26 05:15:59.780195] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:40.741 [2024-07-26 05:15:59.780208] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:40.741 [2024-07-26 05:15:59.780238] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:40.741 [2024-07-26 05:15:59.780252] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:40.741 [2024-07-26 05:15:59.780268] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:40.741 [2024-07-26 05:15:59.780281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:40.741 [2024-07-26 05:15:59.780298] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:40.741 [2024-07-26 05:15:59.780310] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:40.741 [2024-07-26 05:15:59.780326] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:40.741 [2024-07-26 05:15:59.780339] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:40.741 [2024-07-26 05:15:59.780354] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:40.741 [2024-07-26 05:15:59.780367] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:40.741 [2024-07-26 05:15:59.780388] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:40.741 [2024-07-26 05:15:59.780401] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.741 [2024-07-26 05:15:59.780419] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.741 [2024-07-26 05:15:59.780436] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.741 [2024-07-26 05:15:59.780453] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.741 [2024-07-26 05:15:59.780465] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.741 [2024-07-26 05:15:59.780481] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.741 [2024-07-26 05:15:59.780494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:18:40.741 [2024-07-26 05:15:59.780510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.741 [2024-07-26 05:15:59.780522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:18:40.741 [2024-07-26 05:15:59.780537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.741 [2024-07-26 05:15:59.811785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.741 [2024-07-26 05:15:59.811828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:40.741 [2024-07-26 05:15:59.811844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.204 ms 00:18:40.741 [2024-07-26 05:15:59.811859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.741 [2024-07-26 05:15:59.811945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.741 [2024-07-26 05:15:59.811963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:40.741 [2024-07-26 05:15:59.811976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:40.741 [2024-07-26 05:15:59.811990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.000 [2024-07-26 05:15:59.881258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.000 [2024-07-26 05:15:59.881302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:41.000 [2024-07-26 05:15:59.881318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.215 ms 00:18:41.000 [2024-07-26 05:15:59.881335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.000 [2024-07-26 05:15:59.881370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.000 [2024-07-26 05:15:59.881386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:41.000 [2024-07-26 05:15:59.881399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:41.000 [2024-07-26 05:15:59.881419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.000 [2024-07-26 05:15:59.882277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.000 [2024-07-26 05:15:59.882305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:41.000 [2024-07-26 05:15:59.882319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:18:41.000 [2024-07-26 05:15:59.882335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.000 [2024-07-26 05:15:59.882451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.000 [2024-07-26 05:15:59.882474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:41.000 [2024-07-26 05:15:59.882487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:41.000 [2024-07-26 05:15:59.882502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.000 [2024-07-26 05:15:59.910609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.000 [2024-07-26 05:15:59.910652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:41.000 [2024-07-26 05:15:59.910667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.082 ms 00:18:41.000 [2024-07-26 05:15:59.910683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.000 [2024-07-26 05:15:59.926822] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:41.000 [2024-07-26 05:15:59.937100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.000 [2024-07-26 05:15:59.937139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:41.000 [2024-07-26 05:15:59.937160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.305 ms 00:18:41.000 [2024-07-26 05:15:59.937173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.000 [2024-07-26 05:16:00.041971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.000 [2024-07-26 05:16:00.042056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:41.000 [2024-07-26 05:16:00.042082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.730 ms 00:18:41.000 [2024-07-26 05:16:00.042097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.000 [2024-07-26 05:16:00.042162] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:41.000 [2024-07-26 05:16:00.042180] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:18:47.568 [2024-07-26 05:16:06.180959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.568 [2024-07-26 05:16:06.181051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:47.568 [2024-07-26 05:16:06.181078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6138.753 ms 00:18:47.568 [2024-07-26 05:16:06.181092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.568 [2024-07-26 05:16:06.181352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.568 [2024-07-26 05:16:06.181374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:47.568 [2024-07-26 05:16:06.181392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:18:47.568 [2024-07-26 05:16:06.181405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.568 [2024-07-26 05:16:06.217685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.568 [2024-07-26 05:16:06.217732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:47.568 [2024-07-26 05:16:06.217752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.216 ms 00:18:47.569 [2024-07-26 05:16:06.217765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.569 [2024-07-26 05:16:06.253970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.569 [2024-07-26 05:16:06.254012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:47.569 [2024-07-26 05:16:06.254037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.149 ms 00:18:47.569 [2024-07-26 05:16:06.254049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.569 [2024-07-26 05:16:06.254514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.569 [2024-07-26 05:16:06.254531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:47.569 [2024-07-26 05:16:06.254547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:18:47.569 [2024-07-26 05:16:06.254563] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.569 [2024-07-26 05:16:06.348503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.569 [2024-07-26 05:16:06.348553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:47.569 [2024-07-26 05:16:06.348574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.881 ms 00:18:47.569 [2024-07-26 05:16:06.348587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.569 [2024-07-26 05:16:06.387916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.569 [2024-07-26 05:16:06.387960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:47.569 [2024-07-26 05:16:06.387981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.276 ms 00:18:47.569 [2024-07-26 05:16:06.387994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.569 [2024-07-26 05:16:06.391074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.569 [2024-07-26 05:16:06.391110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:47.569 [2024-07-26 05:16:06.391131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.033 ms 00:18:47.569 [2024-07-26 05:16:06.391143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.569 [2024-07-26 05:16:06.428757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.569 [2024-07-26 05:16:06.428806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:47.569 [2024-07-26 05:16:06.428828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.550 ms 00:18:47.569 [2024-07-26 05:16:06.428841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.569 [2024-07-26 05:16:06.428899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.569 [2024-07-26 05:16:06.428913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:47.569 [2024-07-26 05:16:06.428931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:47.569 [2024-07-26 05:16:06.428945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.569 [2024-07-26 05:16:06.429082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.569 [2024-07-26 05:16:06.429097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:47.569 [2024-07-26 05:16:06.429113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:47.569 [2024-07-26 05:16:06.429126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.569 [2024-07-26 05:16:06.430831] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6678.502 ms, result 0 00:18:47.569 { 00:18:47.569 "name": "ftl0", 00:18:47.569 "uuid": "95ac5582-ff1b-4940-b154-9382f0483ceb" 00:18:47.569 } 00:18:47.569 05:16:06 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:18:47.569 05:16:06 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:47.569 05:16:06 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:18:47.828 05:16:06 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:47.828 [2024-07-26 05:16:06.799160] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel created on ftl0 00:18:47.828 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:47.828 Zero copy mechanism will not be used. 00:18:47.828 Running I/O for 4 seconds... 00:18:52.034 00:18:52.034 Latency(us) 00:18:52.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.034 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:52.034 ftl0 : 4.00 1633.78 108.49 0.00 0.00 642.56 214.55 1185.89 00:18:52.034 =================================================================================================================== 00:18:52.034 Total : 1633.78 108.49 0.00 0.00 642.56 214.55 1185.89 00:18:52.034 [2024-07-26 05:16:10.811268] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:52.034 0 00:18:52.034 05:16:10 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:52.034 [2024-07-26 05:16:10.960527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:52.034 Running I/O for 4 seconds... 00:18:56.243 00:18:56.243 Latency(us) 00:18:56.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.243 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:56.243 ftl0 : 4.02 9188.90 35.89 0.00 0.00 13893.56 255.51 47435.58 00:18:56.243 =================================================================================================================== 00:18:56.243 Total : 9188.90 35.89 0.00 0.00 13893.56 0.00 47435.58 00:18:56.243 [2024-07-26 05:16:14.992243] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:56.243 0 00:18:56.243 05:16:15 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:56.243 [2024-07-26 05:16:15.120467] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:56.243 Running I/O for 4 seconds... 
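For reference, the three perform_tests workloads traced in this run, collected in one place (the script path and flags mirror the log; invoking them back to back like this is a simplification of bdevperf.sh):

    bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    "$bdevperf_py" perform_tests -q 1   -w randwrite -t 4 -o 69632   # 69632 > 65536: no zero copy
    "$bdevperf_py" perform_tests -q 128 -w randwrite -t 4 -o 4096
    "$bdevperf_py" perform_tests -q 128 -w verify    -t 4 -o 4096

The 69632-byte I/O size of the first workload is 17 blocks of 4096 bytes; because it exceeds bdevperf's 65536-byte zero-copy threshold, the run above notes that the zero-copy mechanism will not be used.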
00:19:00.432 00:19:00.432 Latency(us) 00:19:00.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.432 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:00.432 Verification LBA range: start 0x0 length 0x1400000 00:19:00.432 ftl0 : 4.01 12802.54 50.01 0.00 0.00 9972.52 165.79 19848.05 00:19:00.432 =================================================================================================================== 00:19:00.432 Total : 12802.54 50.01 0.00 0.00 9972.52 0.00 19848.05 00:19:00.432 [2024-07-26 05:16:19.149717] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:00.432 0 00:19:00.432 05:16:19 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:00.432 [2024-07-26 05:16:19.393387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.432 [2024-07-26 05:16:19.393439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:00.432 [2024-07-26 05:16:19.393459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:00.432 [2024-07-26 05:16:19.393469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.432 [2024-07-26 05:16:19.393500] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:00.432 [2024-07-26 05:16:19.396866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.432 [2024-07-26 05:16:19.396896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:00.432 [2024-07-26 05:16:19.396908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.348 ms 00:19:00.432 [2024-07-26 05:16:19.396927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.432 [2024-07-26 05:16:19.398805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.432 [2024-07-26 05:16:19.398850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:00.432 [2024-07-26 05:16:19.398863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.854 ms 00:19:00.432 [2024-07-26 05:16:19.398876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.692 [2024-07-26 05:16:19.568551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.692 [2024-07-26 05:16:19.568607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:00.692 [2024-07-26 05:16:19.568626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 169.654 ms 00:19:00.692 [2024-07-26 05:16:19.568642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.692 [2024-07-26 05:16:19.573922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.692 [2024-07-26 05:16:19.573958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:00.692 [2024-07-26 05:16:19.573971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.237 ms 00:19:00.692 [2024-07-26 05:16:19.573984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.692 [2024-07-26 05:16:19.612077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.692 [2024-07-26 05:16:19.612117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:00.692 [2024-07-26 05:16:19.612130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
38.035 ms 00:19:00.692 [2024-07-26 05:16:19.612145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.692 [2024-07-26 05:16:19.635251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.692 [2024-07-26 05:16:19.635291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:00.692 [2024-07-26 05:16:19.635306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.068 ms 00:19:00.692 [2024-07-26 05:16:19.635318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.692 [2024-07-26 05:16:19.635449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.692 [2024-07-26 05:16:19.635466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:00.692 [2024-07-26 05:16:19.635480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:00.692 [2024-07-26 05:16:19.635493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.692 [2024-07-26 05:16:19.673384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.692 [2024-07-26 05:16:19.673422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:00.692 [2024-07-26 05:16:19.673436] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.875 ms 00:19:00.692 [2024-07-26 05:16:19.673448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.692 [2024-07-26 05:16:19.710653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.692 [2024-07-26 05:16:19.710691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:00.692 [2024-07-26 05:16:19.710704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.167 ms 00:19:00.692 [2024-07-26 05:16:19.710719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.692 [2024-07-26 05:16:19.747683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.692 [2024-07-26 05:16:19.747719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:00.692 [2024-07-26 05:16:19.747748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.926 ms 00:19:00.692 [2024-07-26 05:16:19.747760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.692 [2024-07-26 05:16:19.785459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.692 [2024-07-26 05:16:19.785499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:00.692 [2024-07-26 05:16:19.785512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.616 ms 00:19:00.692 [2024-07-26 05:16:19.785524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.692 [2024-07-26 05:16:19.785560] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:00.692 [2024-07-26 05:16:19.785580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 
05:16:19.785630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:00.692 [2024-07-26 05:16:19.785940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.785989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.786003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.786014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.786027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.786038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.786054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.786065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.786078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.786089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.786102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.786113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.786127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:00.692 [2024-07-26 05:16:19.786137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:00.693 [2024-07-26 05:16:19.786852] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:00.693 [2024-07-26 05:16:19.786862] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 95ac5582-ff1b-4940-b154-9382f0483ceb 00:19:00.693 [2024-07-26 05:16:19.786878] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:00.693 [2024-07-26 05:16:19.786888] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:00.693 
[2024-07-26 05:16:19.786900] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:00.693 [2024-07-26 05:16:19.786911] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:00.693 [2024-07-26 05:16:19.786923] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:00.693 [2024-07-26 05:16:19.786934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:00.693 [2024-07-26 05:16:19.786946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:00.693 [2024-07-26 05:16:19.786955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:00.693 [2024-07-26 05:16:19.786967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:00.693 [2024-07-26 05:16:19.786977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.693 [2024-07-26 05:16:19.786993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:00.693 [2024-07-26 05:16:19.787004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.419 ms 00:19:00.693 [2024-07-26 05:16:19.787016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:19.806961] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.953 [2024-07-26 05:16:19.806999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:00.953 [2024-07-26 05:16:19.807028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.898 ms 00:19:00.953 [2024-07-26 05:16:19.807044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:19.807319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.953 [2024-07-26 05:16:19.807334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:00.953 [2024-07-26 05:16:19.807346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:19:00.953 [2024-07-26 05:16:19.807358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:19.864920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:00.953 [2024-07-26 05:16:19.864959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:00.953 [2024-07-26 05:16:19.864972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:00.953 [2024-07-26 05:16:19.864986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:19.865048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:00.953 [2024-07-26 05:16:19.865061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:00.953 [2024-07-26 05:16:19.865072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:00.953 [2024-07-26 05:16:19.865084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:19.865153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:00.953 [2024-07-26 05:16:19.865170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:00.953 [2024-07-26 05:16:19.865181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:00.953 [2024-07-26 05:16:19.865196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:19.865228] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:00.953 [2024-07-26 05:16:19.865270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:00.953 [2024-07-26 05:16:19.865280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:00.953 [2024-07-26 05:16:19.865293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:19.980777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:00.953 [2024-07-26 05:16:19.980836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:00.953 [2024-07-26 05:16:19.980867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:00.953 [2024-07-26 05:16:19.980880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:20.028740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:00.953 [2024-07-26 05:16:20.028794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:00.953 [2024-07-26 05:16:20.028810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:00.953 [2024-07-26 05:16:20.028825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:20.028916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:00.953 [2024-07-26 05:16:20.028933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:00.953 [2024-07-26 05:16:20.028945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:00.953 [2024-07-26 05:16:20.028962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:20.029011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:00.953 [2024-07-26 05:16:20.029027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:00.953 [2024-07-26 05:16:20.029041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:00.953 [2024-07-26 05:16:20.029055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:20.029178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:00.953 [2024-07-26 05:16:20.029195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:00.953 [2024-07-26 05:16:20.029206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:00.953 [2024-07-26 05:16:20.029235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:20.029306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:00.953 [2024-07-26 05:16:20.029324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:00.953 [2024-07-26 05:16:20.029335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:00.953 [2024-07-26 05:16:20.029352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:20.029393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:00.953 [2024-07-26 05:16:20.029408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:00.953 [2024-07-26 05:16:20.029420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:00.953 [2024-07-26 05:16:20.029436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
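A note on the statistics dumped just before this rollback: WAF is printed as inf because the write-amplification factor divides total media writes (960, all of them metadata at this point) by user writes (0). A worked sketch of that ratio, assuming a helper of this shape (the function is an illustration, not SPDK code):

    waf() {
        # WAF = total media writes / user data writes; a zero user count prints inf.
        local total=$1 user=$2
        if (( user == 0 )); then
            echo inf
        else
            awk -v t="$total" -v u="$user" 'BEGIN { printf "%.2f\n", t / u }'
        fi
    }
    waf 960 0   # -> inf, matching the 'WAF: inf' line above

The Rollback records around this point undo the startup actions in reverse order (Open base bdev, the first startup action after the configuration check, is the last to roll back), each reported with duration 0.000 ms.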
00:19:00.953 [2024-07-26 05:16:20.029481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:00.953 [2024-07-26 05:16:20.029497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:00.953 [2024-07-26 05:16:20.029512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:00.953 [2024-07-26 05:16:20.029526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.953 [2024-07-26 05:16:20.029658] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 636.230 ms, result 0 00:19:00.953 true 00:19:00.953 05:16:20 -- ftl/bdevperf.sh@37 -- # killprocess 72851 00:19:00.953 05:16:20 -- common/autotest_common.sh@926 -- # '[' -z 72851 ']' 00:19:00.953 05:16:20 -- common/autotest_common.sh@930 -- # kill -0 72851 00:19:00.953 05:16:20 -- common/autotest_common.sh@931 -- # uname 00:19:01.212 05:16:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:01.212 05:16:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72851 00:19:01.212 killing process with pid 72851 00:19:01.213 Received shutdown signal, test time was about 4.000000 seconds 00:19:01.213 00:19:01.213 Latency(us) 00:19:01.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.213 =================================================================================================================== 00:19:01.213 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.213 05:16:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:01.213 05:16:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:01.213 05:16:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72851' 00:19:01.213 05:16:20 -- common/autotest_common.sh@945 -- # kill 72851 00:19:01.213 05:16:20 -- common/autotest_common.sh@950 -- # wait 72851 00:19:02.592 05:16:21 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:19:02.592 05:16:21 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:19:02.592 05:16:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:02.592 05:16:21 -- common/autotest_common.sh@10 -- # set +x 00:19:02.592 Remove shared memory files 00:19:02.592 05:16:21 -- ftl/bdevperf.sh@41 -- # remove_shm 00:19:02.592 05:16:21 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:02.592 05:16:21 -- ftl/common.sh@205 -- # rm -f rm -f 00:19:02.592 05:16:21 -- ftl/common.sh@206 -- # rm -f rm -f 00:19:02.592 05:16:21 -- ftl/common.sh@207 -- # rm -f rm -f 00:19:02.592 05:16:21 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:02.592 05:16:21 -- ftl/common.sh@209 -- # rm -f rm -f 00:19:02.592 ************************************ 00:19:02.592 END TEST ftl_bdevperf 00:19:02.592 ************************************ 00:19:02.592 00:19:02.592 real 0m26.359s 00:19:02.592 user 0m28.962s 00:19:02.592 sys 0m1.358s 00:19:02.592 05:16:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:02.592 05:16:21 -- common/autotest_common.sh@10 -- # set +x 00:19:02.592 05:16:21 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:19:02.592 05:16:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:02.592 05:16:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:02.592 05:16:21 -- common/autotest_common.sh@10 -- # set +x 00:19:02.592 ************************************ 
00:19:02.592 START TEST ftl_trim 00:19:02.592 ************************************ 00:19:02.592 05:16:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:19:02.592 * Looking for test storage... 00:19:02.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.592 05:16:21 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:02.592 05:16:21 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:02.592 05:16:21 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.592 05:16:21 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.592 05:16:21 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:02.592 05:16:21 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:02.592 05:16:21 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.592 05:16:21 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:02.592 05:16:21 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:02.592 05:16:21 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.592 05:16:21 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.592 05:16:21 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:02.592 05:16:21 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:02.592 05:16:21 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:02.592 05:16:21 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:02.592 05:16:21 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:02.592 05:16:21 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:02.592 05:16:21 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.592 05:16:21 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.592 05:16:21 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:02.592 05:16:21 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:02.592 05:16:21 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:02.592 05:16:21 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:02.592 05:16:21 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:02.592 05:16:21 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:02.592 05:16:21 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:02.592 05:16:21 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:02.592 05:16:21 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.592 05:16:21 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.592 05:16:21 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.592 05:16:21 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:19:02.592 05:16:21 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:19:02.592 05:16:21 -- ftl/trim.sh@25 -- # timeout=240 00:19:02.592 05:16:21 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:02.592 05:16:21 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:02.592 05:16:21 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:02.592 05:16:21 -- ftl/trim.sh@34 -- # 
export FTL_BDEV_NAME=ftl0 00:19:02.592 05:16:21 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:02.592 05:16:21 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:02.592 05:16:21 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:02.592 05:16:21 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:02.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.592 05:16:21 -- ftl/trim.sh@40 -- # svcpid=73250 00:19:02.592 05:16:21 -- ftl/trim.sh@41 -- # waitforlisten 73250 00:19:02.592 05:16:21 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:02.592 05:16:21 -- common/autotest_common.sh@819 -- # '[' -z 73250 ']' 00:19:02.592 05:16:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.592 05:16:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:02.592 05:16:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.592 05:16:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:02.592 05:16:21 -- common/autotest_common.sh@10 -- # set +x 00:19:02.852 [2024-07-26 05:16:21.724812] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:02.852 [2024-07-26 05:16:21.725196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73250 ] 00:19:02.852 [2024-07-26 05:16:21.906718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:03.111 [2024-07-26 05:16:22.137348] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:03.111 [2024-07-26 05:16:22.137985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.111 [2024-07-26 05:16:22.138072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.111 [2024-07-26 05:16:22.138103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.486 05:16:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:04.486 05:16:23 -- common/autotest_common.sh@852 -- # return 0 00:19:04.486 05:16:23 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:19:04.486 05:16:23 -- ftl/common.sh@54 -- # local name=nvme0 00:19:04.486 05:16:23 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:19:04.486 05:16:23 -- ftl/common.sh@56 -- # local size=103424 00:19:04.486 05:16:23 -- ftl/common.sh@59 -- # local base_bdev 00:19:04.486 05:16:23 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:19:04.486 05:16:23 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:04.486 05:16:23 -- ftl/common.sh@62 -- # local base_size 00:19:04.486 05:16:23 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:04.486 05:16:23 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:19:04.486 05:16:23 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:04.486 05:16:23 -- common/autotest_common.sh@1359 -- # local bs 00:19:04.486 05:16:23 -- common/autotest_common.sh@1360 -- # local nb 00:19:04.486 05:16:23 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:04.744 
05:16:23 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:04.744 { 00:19:04.744 "name": "nvme0n1", 00:19:04.744 "aliases": [ 00:19:04.744 "37371a83-32cd-487a-8744-7ec80026ac74" 00:19:04.744 ], 00:19:04.744 "product_name": "NVMe disk", 00:19:04.744 "block_size": 4096, 00:19:04.744 "num_blocks": 1310720, 00:19:04.744 "uuid": "37371a83-32cd-487a-8744-7ec80026ac74", 00:19:04.744 "assigned_rate_limits": { 00:19:04.744 "rw_ios_per_sec": 0, 00:19:04.744 "rw_mbytes_per_sec": 0, 00:19:04.744 "r_mbytes_per_sec": 0, 00:19:04.744 "w_mbytes_per_sec": 0 00:19:04.744 }, 00:19:04.744 "claimed": true, 00:19:04.744 "claim_type": "read_many_write_one", 00:19:04.744 "zoned": false, 00:19:04.744 "supported_io_types": { 00:19:04.744 "read": true, 00:19:04.744 "write": true, 00:19:04.744 "unmap": true, 00:19:04.744 "write_zeroes": true, 00:19:04.744 "flush": true, 00:19:04.744 "reset": true, 00:19:04.744 "compare": true, 00:19:04.744 "compare_and_write": false, 00:19:04.744 "abort": true, 00:19:04.744 "nvme_admin": true, 00:19:04.744 "nvme_io": true 00:19:04.744 }, 00:19:04.744 "driver_specific": { 00:19:04.744 "nvme": [ 00:19:04.744 { 00:19:04.744 "pci_address": "0000:00:07.0", 00:19:04.744 "trid": { 00:19:04.744 "trtype": "PCIe", 00:19:04.744 "traddr": "0000:00:07.0" 00:19:04.744 }, 00:19:04.744 "ctrlr_data": { 00:19:04.744 "cntlid": 0, 00:19:04.744 "vendor_id": "0x1b36", 00:19:04.744 "model_number": "QEMU NVMe Ctrl", 00:19:04.744 "serial_number": "12341", 00:19:04.744 "firmware_revision": "8.0.0", 00:19:04.744 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:04.744 "oacs": { 00:19:04.744 "security": 0, 00:19:04.744 "format": 1, 00:19:04.744 "firmware": 0, 00:19:04.744 "ns_manage": 1 00:19:04.744 }, 00:19:04.744 "multi_ctrlr": false, 00:19:04.744 "ana_reporting": false 00:19:04.744 }, 00:19:04.744 "vs": { 00:19:04.744 "nvme_version": "1.4" 00:19:04.744 }, 00:19:04.744 "ns_data": { 00:19:04.745 "id": 1, 00:19:04.745 "can_share": false 00:19:04.745 } 00:19:04.745 } 00:19:04.745 ], 00:19:04.745 "mp_policy": "active_passive" 00:19:04.745 } 00:19:04.745 } 00:19:04.745 ]' 00:19:04.745 05:16:23 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:04.745 05:16:23 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:04.745 05:16:23 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:05.002 05:16:23 -- common/autotest_common.sh@1363 -- # nb=1310720 00:19:05.003 05:16:23 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:19:05.003 05:16:23 -- common/autotest_common.sh@1367 -- # echo 5120 00:19:05.003 05:16:23 -- ftl/common.sh@63 -- # base_size=5120 00:19:05.003 05:16:23 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:05.003 05:16:23 -- ftl/common.sh@67 -- # clear_lvols 00:19:05.003 05:16:23 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:05.003 05:16:23 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:05.003 05:16:24 -- ftl/common.sh@28 -- # stores=e672e71a-1ec5-4f38-8d29-e69d3b3fab14 00:19:05.003 05:16:24 -- ftl/common.sh@29 -- # for lvs in $stores 00:19:05.003 05:16:24 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e672e71a-1ec5-4f38-8d29-e69d3b3fab14 00:19:05.260 05:16:24 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:05.519 05:16:24 -- ftl/common.sh@68 -- # lvs=7673dc2c-bb0b-4706-9e19-2cd7b2ab54cb 00:19:05.519 05:16:24 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
nvme0n1p0 103424 -t -u 7673dc2c-bb0b-4706-9e19-2cd7b2ab54cb 00:19:05.778 05:16:24 -- ftl/trim.sh@43 -- # split_bdev=10dddc4e-0808-49f1-a3b4-7df3ab8f9993 00:19:05.778 05:16:24 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 10dddc4e-0808-49f1-a3b4-7df3ab8f9993 00:19:05.778 05:16:24 -- ftl/common.sh@35 -- # local name=nvc0 00:19:05.778 05:16:24 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:19:05.778 05:16:24 -- ftl/common.sh@37 -- # local base_bdev=10dddc4e-0808-49f1-a3b4-7df3ab8f9993 00:19:05.778 05:16:24 -- ftl/common.sh@38 -- # local cache_size= 00:19:05.778 05:16:24 -- ftl/common.sh@41 -- # get_bdev_size 10dddc4e-0808-49f1-a3b4-7df3ab8f9993 00:19:05.778 05:16:24 -- common/autotest_common.sh@1357 -- # local bdev_name=10dddc4e-0808-49f1-a3b4-7df3ab8f9993 00:19:05.778 05:16:24 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:05.778 05:16:24 -- common/autotest_common.sh@1359 -- # local bs 00:19:05.778 05:16:24 -- common/autotest_common.sh@1360 -- # local nb 00:19:05.778 05:16:24 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 10dddc4e-0808-49f1-a3b4-7df3ab8f9993 00:19:06.036 05:16:24 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:06.036 { 00:19:06.036 "name": "10dddc4e-0808-49f1-a3b4-7df3ab8f9993", 00:19:06.036 "aliases": [ 00:19:06.036 "lvs/nvme0n1p0" 00:19:06.036 ], 00:19:06.036 "product_name": "Logical Volume", 00:19:06.036 "block_size": 4096, 00:19:06.037 "num_blocks": 26476544, 00:19:06.037 "uuid": "10dddc4e-0808-49f1-a3b4-7df3ab8f9993", 00:19:06.037 "assigned_rate_limits": { 00:19:06.037 "rw_ios_per_sec": 0, 00:19:06.037 "rw_mbytes_per_sec": 0, 00:19:06.037 "r_mbytes_per_sec": 0, 00:19:06.037 "w_mbytes_per_sec": 0 00:19:06.037 }, 00:19:06.037 "claimed": false, 00:19:06.037 "zoned": false, 00:19:06.037 "supported_io_types": { 00:19:06.037 "read": true, 00:19:06.037 "write": true, 00:19:06.037 "unmap": true, 00:19:06.037 "write_zeroes": true, 00:19:06.037 "flush": false, 00:19:06.037 "reset": true, 00:19:06.037 "compare": false, 00:19:06.037 "compare_and_write": false, 00:19:06.037 "abort": false, 00:19:06.037 "nvme_admin": false, 00:19:06.037 "nvme_io": false 00:19:06.037 }, 00:19:06.037 "driver_specific": { 00:19:06.037 "lvol": { 00:19:06.037 "lvol_store_uuid": "7673dc2c-bb0b-4706-9e19-2cd7b2ab54cb", 00:19:06.037 "base_bdev": "nvme0n1", 00:19:06.037 "thin_provision": true, 00:19:06.037 "snapshot": false, 00:19:06.037 "clone": false, 00:19:06.037 "esnap_clone": false 00:19:06.037 } 00:19:06.037 } 00:19:06.037 } 00:19:06.037 ]' 00:19:06.037 05:16:24 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:06.037 05:16:25 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:06.037 05:16:25 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:06.037 05:16:25 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:06.037 05:16:25 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:06.037 05:16:25 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:06.037 05:16:25 -- ftl/common.sh@41 -- # local base_size=5171 00:19:06.037 05:16:25 -- ftl/common.sh@44 -- # local nvc_bdev 00:19:06.037 05:16:25 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:19:06.296 05:16:25 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:06.296 05:16:25 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:06.296 05:16:25 -- ftl/common.sh@48 -- # get_bdev_size 10dddc4e-0808-49f1-a3b4-7df3ab8f9993 00:19:06.296 05:16:25 
-- common/autotest_common.sh@1357 -- # local bdev_name=10dddc4e-0808-49f1-a3b4-7df3ab8f9993 00:19:06.296 05:16:25 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:06.296 05:16:25 -- common/autotest_common.sh@1359 -- # local bs 00:19:06.296 05:16:25 -- common/autotest_common.sh@1360 -- # local nb 00:19:06.296 05:16:25 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 10dddc4e-0808-49f1-a3b4-7df3ab8f9993 00:19:06.555 05:16:25 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:06.555 { 00:19:06.555 "name": "10dddc4e-0808-49f1-a3b4-7df3ab8f9993", 00:19:06.555 "aliases": [ 00:19:06.555 "lvs/nvme0n1p0" 00:19:06.555 ], 00:19:06.555 "product_name": "Logical Volume", 00:19:06.555 "block_size": 4096, 00:19:06.555 "num_blocks": 26476544, 00:19:06.555 "uuid": "10dddc4e-0808-49f1-a3b4-7df3ab8f9993", 00:19:06.555 "assigned_rate_limits": { 00:19:06.555 "rw_ios_per_sec": 0, 00:19:06.555 "rw_mbytes_per_sec": 0, 00:19:06.555 "r_mbytes_per_sec": 0, 00:19:06.555 "w_mbytes_per_sec": 0 00:19:06.555 }, 00:19:06.555 "claimed": false, 00:19:06.555 "zoned": false, 00:19:06.555 "supported_io_types": { 00:19:06.555 "read": true, 00:19:06.555 "write": true, 00:19:06.555 "unmap": true, 00:19:06.555 "write_zeroes": true, 00:19:06.555 "flush": false, 00:19:06.555 "reset": true, 00:19:06.555 "compare": false, 00:19:06.555 "compare_and_write": false, 00:19:06.555 "abort": false, 00:19:06.555 "nvme_admin": false, 00:19:06.555 "nvme_io": false 00:19:06.555 }, 00:19:06.555 "driver_specific": { 00:19:06.555 "lvol": { 00:19:06.555 "lvol_store_uuid": "7673dc2c-bb0b-4706-9e19-2cd7b2ab54cb", 00:19:06.555 "base_bdev": "nvme0n1", 00:19:06.555 "thin_provision": true, 00:19:06.555 "snapshot": false, 00:19:06.555 "clone": false, 00:19:06.555 "esnap_clone": false 00:19:06.555 } 00:19:06.555 } 00:19:06.555 } 00:19:06.555 ]' 00:19:06.555 05:16:25 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:06.555 05:16:25 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:06.555 05:16:25 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:06.555 05:16:25 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:06.555 05:16:25 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:06.555 05:16:25 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:06.814 05:16:25 -- ftl/common.sh@48 -- # cache_size=5171 00:19:06.814 05:16:25 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:06.814 05:16:25 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:06.814 05:16:25 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:06.814 05:16:25 -- ftl/trim.sh@47 -- # get_bdev_size 10dddc4e-0808-49f1-a3b4-7df3ab8f9993 00:19:06.814 05:16:25 -- common/autotest_common.sh@1357 -- # local bdev_name=10dddc4e-0808-49f1-a3b4-7df3ab8f9993 00:19:06.814 05:16:25 -- common/autotest_common.sh@1358 -- # local bdev_info 00:19:06.814 05:16:25 -- common/autotest_common.sh@1359 -- # local bs 00:19:06.814 05:16:25 -- common/autotest_common.sh@1360 -- # local nb 00:19:06.814 05:16:25 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 10dddc4e-0808-49f1-a3b4-7df3ab8f9993 00:19:07.074 05:16:26 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:19:07.074 { 00:19:07.074 "name": "10dddc4e-0808-49f1-a3b4-7df3ab8f9993", 00:19:07.074 "aliases": [ 00:19:07.074 "lvs/nvme0n1p0" 00:19:07.074 ], 00:19:07.074 "product_name": "Logical Volume", 00:19:07.074 "block_size": 4096, 00:19:07.074 
"num_blocks": 26476544, 00:19:07.074 "uuid": "10dddc4e-0808-49f1-a3b4-7df3ab8f9993", 00:19:07.074 "assigned_rate_limits": { 00:19:07.074 "rw_ios_per_sec": 0, 00:19:07.074 "rw_mbytes_per_sec": 0, 00:19:07.074 "r_mbytes_per_sec": 0, 00:19:07.074 "w_mbytes_per_sec": 0 00:19:07.074 }, 00:19:07.074 "claimed": false, 00:19:07.074 "zoned": false, 00:19:07.074 "supported_io_types": { 00:19:07.074 "read": true, 00:19:07.074 "write": true, 00:19:07.074 "unmap": true, 00:19:07.074 "write_zeroes": true, 00:19:07.074 "flush": false, 00:19:07.074 "reset": true, 00:19:07.074 "compare": false, 00:19:07.074 "compare_and_write": false, 00:19:07.074 "abort": false, 00:19:07.074 "nvme_admin": false, 00:19:07.074 "nvme_io": false 00:19:07.074 }, 00:19:07.074 "driver_specific": { 00:19:07.074 "lvol": { 00:19:07.074 "lvol_store_uuid": "7673dc2c-bb0b-4706-9e19-2cd7b2ab54cb", 00:19:07.074 "base_bdev": "nvme0n1", 00:19:07.074 "thin_provision": true, 00:19:07.074 "snapshot": false, 00:19:07.074 "clone": false, 00:19:07.074 "esnap_clone": false 00:19:07.074 } 00:19:07.074 } 00:19:07.074 } 00:19:07.074 ]' 00:19:07.074 05:16:26 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:19:07.074 05:16:26 -- common/autotest_common.sh@1362 -- # bs=4096 00:19:07.074 05:16:26 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:19:07.074 05:16:26 -- common/autotest_common.sh@1363 -- # nb=26476544 00:19:07.074 05:16:26 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:19:07.074 05:16:26 -- common/autotest_common.sh@1367 -- # echo 103424 00:19:07.074 05:16:26 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:07.074 05:16:26 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 10dddc4e-0808-49f1-a3b4-7df3ab8f9993 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:07.333 [2024-07-26 05:16:26.316687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.333 [2024-07-26 05:16:26.316734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:07.333 [2024-07-26 05:16:26.316758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:07.333 [2024-07-26 05:16:26.316769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.333 [2024-07-26 05:16:26.320166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.333 [2024-07-26 05:16:26.320214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:07.333 [2024-07-26 05:16:26.320230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.358 ms 00:19:07.333 [2024-07-26 05:16:26.320240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.333 [2024-07-26 05:16:26.320389] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:07.333 [2024-07-26 05:16:26.321578] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:07.333 [2024-07-26 05:16:26.321616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.333 [2024-07-26 05:16:26.321628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:07.333 [2024-07-26 05:16:26.321641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:19:07.333 [2024-07-26 05:16:26.321651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.333 [2024-07-26 05:16:26.321768] mngt/ftl_mngt_md.c: 
567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0bdf932e-f5a4-44d7-9d31-9e46b2e97560 00:19:07.333 [2024-07-26 05:16:26.323200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.333 [2024-07-26 05:16:26.323247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:07.334 [2024-07-26 05:16:26.323260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:07.334 [2024-07-26 05:16:26.323272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.334 [2024-07-26 05:16:26.330977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.334 [2024-07-26 05:16:26.331011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:07.334 [2024-07-26 05:16:26.331024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.600 ms 00:19:07.334 [2024-07-26 05:16:26.331036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.334 [2024-07-26 05:16:26.331198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.334 [2024-07-26 05:16:26.331235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:07.334 [2024-07-26 05:16:26.331263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:07.334 [2024-07-26 05:16:26.331279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.334 [2024-07-26 05:16:26.331331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.334 [2024-07-26 05:16:26.331344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:07.334 [2024-07-26 05:16:26.331357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:07.334 [2024-07-26 05:16:26.331370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.334 [2024-07-26 05:16:26.331411] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:07.334 [2024-07-26 05:16:26.337467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.334 [2024-07-26 05:16:26.337498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:07.334 [2024-07-26 05:16:26.337512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.060 ms 00:19:07.334 [2024-07-26 05:16:26.337522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.334 [2024-07-26 05:16:26.337600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.334 [2024-07-26 05:16:26.337612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:07.334 [2024-07-26 05:16:26.337625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:07.334 [2024-07-26 05:16:26.337635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.334 [2024-07-26 05:16:26.337679] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:07.334 [2024-07-26 05:16:26.337805] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:07.334 [2024-07-26 05:16:26.337825] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:07.334 [2024-07-26 05:16:26.337838] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x140 bytes 00:19:07.334 [2024-07-26 05:16:26.337854] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:07.334 [2024-07-26 05:16:26.337867] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:07.334 [2024-07-26 05:16:26.337880] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:07.334 [2024-07-26 05:16:26.337890] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:07.334 [2024-07-26 05:16:26.337906] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:07.334 [2024-07-26 05:16:26.337917] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:07.334 [2024-07-26 05:16:26.337930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.334 [2024-07-26 05:16:26.337940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:07.334 [2024-07-26 05:16:26.337952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:19:07.334 [2024-07-26 05:16:26.337963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.334 [2024-07-26 05:16:26.338045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.334 [2024-07-26 05:16:26.338056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:07.334 [2024-07-26 05:16:26.338068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:07.334 [2024-07-26 05:16:26.338078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.334 [2024-07-26 05:16:26.338221] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:07.334 [2024-07-26 05:16:26.338235] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:07.334 [2024-07-26 05:16:26.338248] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.334 [2024-07-26 05:16:26.338259] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.334 [2024-07-26 05:16:26.338271] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:07.334 [2024-07-26 05:16:26.338281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:07.334 [2024-07-26 05:16:26.338293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:07.334 [2024-07-26 05:16:26.338302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:07.334 [2024-07-26 05:16:26.338314] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:07.334 [2024-07-26 05:16:26.338324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.334 [2024-07-26 05:16:26.338336] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:07.334 [2024-07-26 05:16:26.338346] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:07.334 [2024-07-26 05:16:26.338359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.334 [2024-07-26 05:16:26.338369] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:07.334 [2024-07-26 05:16:26.338381] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:07.334 [2024-07-26 05:16:26.338390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.334 [2024-07-26 05:16:26.338403] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:19:07.334 [2024-07-26 05:16:26.338413] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:07.334 [2024-07-26 05:16:26.338424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.334 [2024-07-26 05:16:26.338433] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:07.334 [2024-07-26 05:16:26.338445] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:07.334 [2024-07-26 05:16:26.338454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:07.334 [2024-07-26 05:16:26.338472] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:07.334 [2024-07-26 05:16:26.338481] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:07.334 [2024-07-26 05:16:26.338494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:07.334 [2024-07-26 05:16:26.338503] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:07.334 [2024-07-26 05:16:26.338515] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:07.334 [2024-07-26 05:16:26.338524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:07.334 [2024-07-26 05:16:26.338536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:07.334 [2024-07-26 05:16:26.338546] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:07.334 [2024-07-26 05:16:26.338557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:07.334 [2024-07-26 05:16:26.338566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:07.334 [2024-07-26 05:16:26.338580] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:07.334 [2024-07-26 05:16:26.338589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:07.334 [2024-07-26 05:16:26.338600] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:07.334 [2024-07-26 05:16:26.338610] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:07.334 [2024-07-26 05:16:26.338621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.334 [2024-07-26 05:16:26.338630] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:07.334 [2024-07-26 05:16:26.338643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:07.334 [2024-07-26 05:16:26.338652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.334 [2024-07-26 05:16:26.338663] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:07.334 [2024-07-26 05:16:26.338673] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:07.334 [2024-07-26 05:16:26.338685] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.334 [2024-07-26 05:16:26.338695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.334 [2024-07-26 05:16:26.338707] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:07.334 [2024-07-26 05:16:26.338717] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:07.334 [2024-07-26 05:16:26.338728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:07.334 [2024-07-26 05:16:26.338738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:07.334 [2024-07-26 05:16:26.338751] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:07.334 [2024-07-26 05:16:26.338760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:07.334 [2024-07-26 05:16:26.338773] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:07.334 [2024-07-26 05:16:26.338786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.334 [2024-07-26 05:16:26.338802] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:07.334 [2024-07-26 05:16:26.338813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:07.334 [2024-07-26 05:16:26.338826] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:07.334 [2024-07-26 05:16:26.338836] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:07.334 [2024-07-26 05:16:26.338850] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:07.334 [2024-07-26 05:16:26.338861] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:07.334 [2024-07-26 05:16:26.338873] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:07.335 [2024-07-26 05:16:26.338884] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:07.335 [2024-07-26 05:16:26.338897] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:07.335 [2024-07-26 05:16:26.338907] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:07.335 [2024-07-26 05:16:26.338926] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:07.335 [2024-07-26 05:16:26.338937] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:07.335 [2024-07-26 05:16:26.338953] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:07.335 [2024-07-26 05:16:26.338963] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:07.335 [2024-07-26 05:16:26.338977] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.335 [2024-07-26 05:16:26.338988] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:07.335 [2024-07-26 05:16:26.339000] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:07.335 [2024-07-26 05:16:26.339010] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:07.335 [2024-07-26 05:16:26.339022] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:07.335 [2024-07-26 05:16:26.339033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.335 [2024-07-26 05:16:26.339045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:07.335 [2024-07-26 05:16:26.339055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:19:07.335 [2024-07-26 05:16:26.339068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.335 [2024-07-26 05:16:26.364704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.335 [2024-07-26 05:16:26.364741] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:07.335 [2024-07-26 05:16:26.364757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.539 ms 00:19:07.335 [2024-07-26 05:16:26.364769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.335 [2024-07-26 05:16:26.364903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.335 [2024-07-26 05:16:26.364919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:07.335 [2024-07-26 05:16:26.364931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:07.335 [2024-07-26 05:16:26.364942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.335 [2024-07-26 05:16:26.419809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.335 [2024-07-26 05:16:26.419850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:07.335 [2024-07-26 05:16:26.419865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.818 ms 00:19:07.335 [2024-07-26 05:16:26.419878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.335 [2024-07-26 05:16:26.419969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.335 [2024-07-26 05:16:26.420002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:07.335 [2024-07-26 05:16:26.420013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:07.335 [2024-07-26 05:16:26.420026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.335 [2024-07-26 05:16:26.420494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.335 [2024-07-26 05:16:26.420516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:07.335 [2024-07-26 05:16:26.420527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:19:07.335 [2024-07-26 05:16:26.420539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.335 [2024-07-26 05:16:26.420655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.335 [2024-07-26 05:16:26.420671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:07.335 [2024-07-26 05:16:26.420682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:07.335 [2024-07-26 05:16:26.420693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.594 [2024-07-26 05:16:26.454637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.594 [2024-07-26 
05:16:26.454686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:07.594 [2024-07-26 05:16:26.454703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.898 ms 00:19:07.594 [2024-07-26 05:16:26.454719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.594 [2024-07-26 05:16:26.468915] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:07.594 [2024-07-26 05:16:26.485663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.594 [2024-07-26 05:16:26.485722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:07.594 [2024-07-26 05:16:26.485756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.772 ms 00:19:07.594 [2024-07-26 05:16:26.485767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.594 [2024-07-26 05:16:26.598412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.594 [2024-07-26 05:16:26.598478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:07.594 [2024-07-26 05:16:26.598513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.519 ms 00:19:07.594 [2024-07-26 05:16:26.598524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.594 [2024-07-26 05:16:26.598641] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:19:07.594 [2024-07-26 05:16:26.598657] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:19:11.796 [2024-07-26 05:16:30.304112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.796 [2024-07-26 05:16:30.304176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:11.796 [2024-07-26 05:16:30.304212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3705.446 ms 00:19:11.796 [2024-07-26 05:16:30.304236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.796 [2024-07-26 05:16:30.304486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.796 [2024-07-26 05:16:30.304502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:11.796 [2024-07-26 05:16:30.304516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:19:11.796 [2024-07-26 05:16:30.304529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.796 [2024-07-26 05:16:30.341502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.796 [2024-07-26 05:16:30.341544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:11.796 [2024-07-26 05:16:30.341562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.922 ms 00:19:11.796 [2024-07-26 05:16:30.341573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.796 [2024-07-26 05:16:30.379187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.796 [2024-07-26 05:16:30.379242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:11.796 [2024-07-26 05:16:30.379279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.520 ms 00:19:11.796 [2024-07-26 05:16:30.379288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.796 [2024-07-26 05:16:30.379773] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.796 [2024-07-26 05:16:30.379793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:11.796 [2024-07-26 05:16:30.379807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:19:11.796 [2024-07-26 05:16:30.379817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.796 [2024-07-26 05:16:30.477165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.796 [2024-07-26 05:16:30.477217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:11.796 [2024-07-26 05:16:30.477235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.297 ms 00:19:11.796 [2024-07-26 05:16:30.477245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.796 [2024-07-26 05:16:30.516602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.796 [2024-07-26 05:16:30.516640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:11.796 [2024-07-26 05:16:30.516676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.241 ms 00:19:11.796 [2024-07-26 05:16:30.516687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.796 [2024-07-26 05:16:30.521693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.796 [2024-07-26 05:16:30.521726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:11.796 [2024-07-26 05:16:30.521744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.914 ms 00:19:11.796 [2024-07-26 05:16:30.521754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.796 [2024-07-26 05:16:30.560066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.796 [2024-07-26 05:16:30.560103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:11.796 [2024-07-26 05:16:30.560120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.248 ms 00:19:11.796 [2024-07-26 05:16:30.560130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.796 [2024-07-26 05:16:30.560262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.796 [2024-07-26 05:16:30.560275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:11.796 [2024-07-26 05:16:30.560289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:11.796 [2024-07-26 05:16:30.560299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.796 [2024-07-26 05:16:30.560398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.796 [2024-07-26 05:16:30.560412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:11.796 [2024-07-26 05:16:30.560425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:11.796 [2024-07-26 05:16:30.560435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.796 [2024-07-26 05:16:30.561376] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:11.796 { 00:19:11.796 "name": "ftl0", 00:19:11.796 "uuid": "0bdf932e-f5a4-44d7-9d31-9e46b2e97560" 00:19:11.796 } 00:19:11.796 [2024-07-26 05:16:30.566767] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4244.362 ms, 
result 0 00:19:11.796 [2024-07-26 05:16:30.567710] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:11.796 05:16:30 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:11.796 05:16:30 -- common/autotest_common.sh@887 -- # local bdev_name=ftl0 00:19:11.796 05:16:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:11.796 05:16:30 -- common/autotest_common.sh@889 -- # local i 00:19:11.796 05:16:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:11.796 05:16:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:11.796 05:16:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:11.796 05:16:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:12.056 [ 00:19:12.056 { 00:19:12.056 "name": "ftl0", 00:19:12.056 "aliases": [ 00:19:12.056 "0bdf932e-f5a4-44d7-9d31-9e46b2e97560" 00:19:12.056 ], 00:19:12.056 "product_name": "FTL disk", 00:19:12.056 "block_size": 4096, 00:19:12.056 "num_blocks": 23592960, 00:19:12.056 "uuid": "0bdf932e-f5a4-44d7-9d31-9e46b2e97560", 00:19:12.056 "assigned_rate_limits": { 00:19:12.056 "rw_ios_per_sec": 0, 00:19:12.056 "rw_mbytes_per_sec": 0, 00:19:12.056 "r_mbytes_per_sec": 0, 00:19:12.056 "w_mbytes_per_sec": 0 00:19:12.056 }, 00:19:12.056 "claimed": false, 00:19:12.056 "zoned": false, 00:19:12.056 "supported_io_types": { 00:19:12.056 "read": true, 00:19:12.056 "write": true, 00:19:12.056 "unmap": true, 00:19:12.056 "write_zeroes": true, 00:19:12.056 "flush": true, 00:19:12.056 "reset": false, 00:19:12.056 "compare": false, 00:19:12.056 "compare_and_write": false, 00:19:12.056 "abort": false, 00:19:12.056 "nvme_admin": false, 00:19:12.056 "nvme_io": false 00:19:12.056 }, 00:19:12.056 "driver_specific": { 00:19:12.056 "ftl": { 00:19:12.056 "base_bdev": "10dddc4e-0808-49f1-a3b4-7df3ab8f9993", 00:19:12.056 "cache": "nvc0n1p0" 00:19:12.056 } 00:19:12.056 } 00:19:12.056 } 00:19:12.056 ] 00:19:12.056 05:16:31 -- common/autotest_common.sh@895 -- # return 0 00:19:12.056 05:16:31 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:12.056 05:16:31 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:12.315 05:16:31 -- ftl/trim.sh@56 -- # echo ']}' 00:19:12.315 05:16:31 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:12.575 05:16:31 -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:12.575 { 00:19:12.575 "name": "ftl0", 00:19:12.575 "aliases": [ 00:19:12.575 "0bdf932e-f5a4-44d7-9d31-9e46b2e97560" 00:19:12.575 ], 00:19:12.575 "product_name": "FTL disk", 00:19:12.575 "block_size": 4096, 00:19:12.575 "num_blocks": 23592960, 00:19:12.575 "uuid": "0bdf932e-f5a4-44d7-9d31-9e46b2e97560", 00:19:12.575 "assigned_rate_limits": { 00:19:12.575 "rw_ios_per_sec": 0, 00:19:12.575 "rw_mbytes_per_sec": 0, 00:19:12.575 "r_mbytes_per_sec": 0, 00:19:12.575 "w_mbytes_per_sec": 0 00:19:12.575 }, 00:19:12.575 "claimed": false, 00:19:12.575 "zoned": false, 00:19:12.575 "supported_io_types": { 00:19:12.575 "read": true, 00:19:12.575 "write": true, 00:19:12.575 "unmap": true, 00:19:12.575 "write_zeroes": true, 00:19:12.575 "flush": true, 00:19:12.575 "reset": false, 00:19:12.575 "compare": false, 00:19:12.575 "compare_and_write": false, 00:19:12.575 "abort": false, 00:19:12.575 "nvme_admin": false, 00:19:12.575 "nvme_io": false 00:19:12.575 }, 00:19:12.575 "driver_specific": { 00:19:12.575 "ftl": { 
00:19:12.575 "base_bdev": "10dddc4e-0808-49f1-a3b4-7df3ab8f9993", 00:19:12.575 "cache": "nvc0n1p0" 00:19:12.575 } 00:19:12.575 } 00:19:12.575 } 00:19:12.575 ]' 00:19:12.575 05:16:31 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:12.575 05:16:31 -- ftl/trim.sh@60 -- # nb=23592960 00:19:12.575 05:16:31 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:12.575 [2024-07-26 05:16:31.630513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.575 [2024-07-26 05:16:31.630565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:12.575 [2024-07-26 05:16:31.630581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:12.575 [2024-07-26 05:16:31.630594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.575 [2024-07-26 05:16:31.630636] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:12.575 [2024-07-26 05:16:31.634230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.575 [2024-07-26 05:16:31.634262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:12.575 [2024-07-26 05:16:31.634278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.566 ms 00:19:12.575 [2024-07-26 05:16:31.634288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.575 [2024-07-26 05:16:31.635072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.575 [2024-07-26 05:16:31.635099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:12.575 [2024-07-26 05:16:31.635115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:19:12.575 [2024-07-26 05:16:31.635125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.575 [2024-07-26 05:16:31.638047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.575 [2024-07-26 05:16:31.638071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:12.575 [2024-07-26 05:16:31.638088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:19:12.575 [2024-07-26 05:16:31.638098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.575 [2024-07-26 05:16:31.643871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.575 [2024-07-26 05:16:31.643904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:12.575 [2024-07-26 05:16:31.643917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.723 ms 00:19:12.575 [2024-07-26 05:16:31.643943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.575 [2024-07-26 05:16:31.682225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.575 [2024-07-26 05:16:31.682261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:12.575 [2024-07-26 05:16:31.682279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.172 ms 00:19:12.575 [2024-07-26 05:16:31.682288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.836 [2024-07-26 05:16:31.706338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.836 [2024-07-26 05:16:31.706381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:12.836 [2024-07-26 05:16:31.706398] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.943 ms 00:19:12.836 [2024-07-26 05:16:31.706409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.836 [2024-07-26 05:16:31.706686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.836 [2024-07-26 05:16:31.706700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:12.836 [2024-07-26 05:16:31.706716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:19:12.836 [2024-07-26 05:16:31.706729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.836 [2024-07-26 05:16:31.745406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.836 [2024-07-26 05:16:31.745444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:12.836 [2024-07-26 05:16:31.745461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.635 ms 00:19:12.836 [2024-07-26 05:16:31.745471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.836 [2024-07-26 05:16:31.784040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.836 [2024-07-26 05:16:31.784075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:12.836 [2024-07-26 05:16:31.784092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.484 ms 00:19:12.836 [2024-07-26 05:16:31.784101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.836 [2024-07-26 05:16:31.822391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.836 [2024-07-26 05:16:31.822427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:12.836 [2024-07-26 05:16:31.822443] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.199 ms 00:19:12.836 [2024-07-26 05:16:31.822469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.836 [2024-07-26 05:16:31.859507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.836 [2024-07-26 05:16:31.859540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:12.836 [2024-07-26 05:16:31.859575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.873 ms 00:19:12.836 [2024-07-26 05:16:31.859584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.836 [2024-07-26 05:16:31.859674] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:12.836 [2024-07-26 05:16:31.859692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:12.836 [2024-07-26 05:16:31.859710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:12.836 [2024-07-26 05:16:31.859721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:12.836 [2024-07-26 05:16:31.859734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:12.836 [2024-07-26 05:16:31.859745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:12.836 [2024-07-26 05:16:31.859757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:12.836 [2024-07-26 05:16:31.859768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:12.836 
[2024-07-26 05:16:31.859781 .. 05:16:31.860952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8 .. Band 100 (93 bands): 0 / 261120 wr_cnt: 0 state: free
00:19:12.837 [2024-07-26 05:16:31.860971] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:12.837 [2024-07-26 05:16:31.860984] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bdf932e-f5a4-44d7-9d31-9e46b2e97560
00:19:12.837 [2024-07-26 05:16:31.860994] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:12.837 [2024-07-26 05:16:31.861006] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:12.837 [2024-07-26 05:16:31.861016] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:12.837 [2024-07-26 05:16:31.861028] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:12.837 [2024-07-26 05:16:31.861038] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:12.837 [2024-07-26 05:16:31.861051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:12.837 [2024-07-26 05:16:31.861061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:12.837 [2024-07-26 05:16:31.861075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:12.837 [2024-07-26 05:16:31.861084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:12.837 [2024-07-26 05:16:31.861096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.837 [2024-07-26 05:16:31.861106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:12.837 [2024-07-26 05:16:31.861119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms
00:19:12.837 [2024-07-26 05:16:31.861132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.837 [2024-07-26 05:16:31.881272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.837 [2024-07-26 05:16:31.881298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:12.837 [2024-07-26 05:16:31.881313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.073 ms
00:19:12.837 [2024-07-26 05:16:31.881323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:12.837 [2024-07-26 05:16:31.881615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:12.837 [2024-07-26 05:16:31.881630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:12.837 [2024-07-26 05:16:31.881643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms
00:19:12.837 [2024-07-26 05:16:31.881653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:13.097 [2024-07-26 05:16:31.949588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:13.097 [2024-07-26 05:16:31.949630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:13.097 [2024-07-26 05:16:31.949649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:13.097 [2024-07-26 05:16:31.949660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:13.097 [2024-07-26 05:16:31.949775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:13.097 [2024-07-26 05:16:31.949791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:13.097 [2024-07-26 05:16:31.949804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:13.097 [2024-07-26 05:16:31.949814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:13.097 [2024-07-26 05:16:31.949887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:13.097 [2024-07-26 05:16:31.949900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:13.097 [2024-07-26 05:16:31.949913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:13.097 [2024-07-26 05:16:31.949923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:13.097 [2024-07-26 05:16:31.949965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:13.097 [2024-07-26 05:16:31.949976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:13.097 [2024-07-26 05:16:31.949989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:13.097 [2024-07-26 05:16:31.950001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
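The statistics dump above contains everything needed to recompute the one derived figure it prints: WAF (write amplification factor) behaves here as total device writes over user writes, so with total writes: 960 (all of it FTL-internal traffic, since no user data has been written yet) and user writes: 0, the ratio degenerates to the "WAF: inf" the FTL logs. A minimal sketch of that arithmetic in shell, assuming nothing beyond the two counters shown above:

  # Recompute WAF from the two counters printed by ftl_dev_dump_stats above.
  # Zero user writes makes the ratio undefined, which the FTL prints as "inf".
  total_writes=960   # "total writes" from the dump
  user_writes=0      # "user writes" from the dump
  if [ "$user_writes" -eq 0 ]; then
      echo "WAF: inf"
  else
      awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.3f\n", t / u }'
  fi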
00:19:13.097 [2024-07-26 05:16:32.086547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:13.097 [2024-07-26 05:16:32.086613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:13.097 [2024-07-26 05:16:32.086648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:13.097 [2024-07-26 05:16:32.086658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:13.097 [2024-07-26 05:16:32.133069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:13.097 [2024-07-26 05:16:32.133115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:13.097 [2024-07-26 05:16:32.133150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:13.097 [2024-07-26 05:16:32.133160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:13.097 [2024-07-26 05:16:32.133316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:13.097 [2024-07-26 05:16:32.133330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:13.097 [2024-07-26 05:16:32.133343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:13.097 [2024-07-26 05:16:32.133372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:13.097 [2024-07-26 05:16:32.133453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:13.097 [2024-07-26 05:16:32.133464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:13.097 [2024-07-26 05:16:32.133477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:13.097 [2024-07-26 05:16:32.133487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:13.097 [2024-07-26 05:16:32.133645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:13.097 [2024-07-26 05:16:32.133659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:13.097 [2024-07-26 05:16:32.133674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:13.097 [2024-07-26 05:16:32.133685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:13.097 [2024-07-26 05:16:32.133776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:13.097 [2024-07-26 05:16:32.133788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:13.097 [2024-07-26 05:16:32.133801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:13.097 [2024-07-26 05:16:32.133811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:13.097 [2024-07-26 05:16:32.133880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:13.097 [2024-07-26 05:16:32.133891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:13.097 [2024-07-26 05:16:32.133904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:13.097 [2024-07-26 05:16:32.133913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:13.097 [2024-07-26 05:16:32.133986] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:13.097 [2024-07-26 05:16:32.133997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:13.097 [2024-07-26 05:16:32.134009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:13.097 [2024-07-26 05:16:32.134019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:13.097 [2024-07-26 05:16:32.134245] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 503.693 ms, result 0
00:19:13.097 true
00:19:13.097 05:16:32 -- ftl/trim.sh@63 -- # killprocess 73250
00:19:13.097 05:16:32 -- common/autotest_common.sh@926 -- # '[' -z 73250 ']'
00:19:13.097 05:16:32 -- common/autotest_common.sh@930 -- # kill -0 73250
00:19:13.097 05:16:32 -- common/autotest_common.sh@931 -- # uname
00:19:13.097 05:16:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:19:13.097 05:16:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73250
00:19:13.097 killing process with pid 73250
00:19:13.097 05:16:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:19:13.097 05:16:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:19:13.097 05:16:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73250'
00:19:13.097 05:16:32 -- common/autotest_common.sh@945 -- # kill 73250
00:19:13.097 05:16:32 -- common/autotest_common.sh@950 -- # wait 73250
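The xtrace above lays out the harness's killprocess guard sequence line by line: require a pid, probe it with kill -0, resolve the command name on Linux, refuse to signal a sudo wrapper, then kill and wait. A hedged reconstruction of that shape (the real helper lives in common/autotest_common.sh; anything not visible in the trace, such as what the sudo branch does instead, is an assumption here):

  # Sketch of a killprocess-style helper, reconstructed from the xtrace above.
  # The @926..@950 markers refer to the traced script lines; details beyond
  # what the trace shows are assumptions.
  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                            # @926: a pid is required
      kill -0 "$pid" || return 1                           # @930: process must be alive
      local process_name=unknown
      if [ "$(uname)" = Linux ]; then                      # @931
          process_name=$(ps --no-headers -o comm= "$pid")  # @932: "reactor_0" here
      fi
      [ "$process_name" = sudo ] && return 1               # @936: never signal a sudo wrapper
      echo "killing process with pid $pid"                 # @944
      kill "$pid"                                          # @945
      wait "$pid"                                          # @950: reap it, surface its exit code
  }

In the run above the guard resolves to reactor_0 (the SPDK reactor process), the kill lands, and the roughly four-second gap before the next command is the wait reaping the dying target.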
00:19:18.368 05:16:36 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:19:18.936 65536+0 records in
00:19:18.936 65536+0 records out
00:19:18.936 268435456 bytes (268 MB, 256 MiB) copied, 1.00032 s, 268 MB/s
00:19:18.936 05:16:37 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:19.196 [2024-07-26 05:16:37.984344] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:19:19.196 [2024-07-26 05:16:37.984449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73484 ]
00:19:19.454 [2024-07-26 05:16:38.142140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:19.713 [2024-07-26 05:16:38.361351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:19.972 [2024-07-26 05:16:38.758162] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:19.972 [2024-07-26 05:16:38.758242] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:19.972 [2024-07-26 05:16:38.915744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:19.972 [2024-07-26 05:16:38.915789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:19:19.972 [2024-07-26 05:16:38.915803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:19:19.972 [2024-07-26 05:16:38.915832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:19.972 [2024-07-26 05:16:38.919200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:19.972 [2024-07-26 05:16:38.919251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:19.972 [2024-07-26 05:16:38.919281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.348 ms
00:19:19.972 [2024-07-26 05:16:38.919295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:19.972 [2024-07-26 05:16:38.919390] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:19.972 [2024-07-26 05:16:38.920558] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:19.972 [2024-07-26 05:16:38.920591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:19.972 [2024-07-26 05:16:38.920606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:19.972 [2024-07-26 05:16:38.920617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.209 ms
00:19:19.972 [2024-07-26 05:16:38.920627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:19.972 [2024-07-26 05:16:38.922120] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:19:19.972 [2024-07-26 05:16:38.941160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:19.972 [2024-07-26 05:16:38.941197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:19:19.972 [2024-07-26 05:16:38.941238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.041 ms
00:19:19.972 [2024-07-26 05:16:38.941249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:19.972 [2024-07-26 05:16:38.941371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:19.972 [2024-07-26 05:16:38.941386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:19:19.972 [2024-07-26 05:16:38.941401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:19:19.972 [2024-07-26 05:16:38.941411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:19.972 [2024-07-26 05:16:38.948125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:19.972 [2024-07-26 05:16:38.948150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:19.972 [2024-07-26 05:16:38.948161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.670 ms
00:19:19.972 [2024-07-26 05:16:38.948170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:19.972 [2024-07-26 05:16:38.948278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:19.972 [2024-07-26 05:16:38.948295] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:19.972 [2024-07-26 05:16:38.948307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms
00:19:19.972 [2024-07-26 05:16:38.948316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:19.972 [2024-07-26 05:16:38.948345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:19.972 [2024-07-26 05:16:38.948355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:19:19.972 [2024-07-26 05:16:38.948365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:19:19.972 [2024-07-26 05:16:38.948374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:19.972 [2024-07-26 05:16:38.948399] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:19:19.972 [2024-07-26 05:16:38.954344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:19.972 [2024-07-26 05:16:38.954489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:19.972 [2024-07-26 05:16:38.954621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.952 ms
00:19:19.972 [2024-07-26 05:16:38.954658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.972
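The pattern-file step above is pure arithmetic: 65536 blocks of 4 KiB is 268435456 bytes, exactly 256 MiB (268 MB in dd's decimal units), and at the reported 268 MB/s the copy takes almost exactly the 1.00032 s dd measured; spdk_dd then replays that file into the freshly restarted ftl0 bdev. A quick shell check of the sizing, using only the dd parameters from the log:

  # Verify the random-pattern sizing from the dd invocation above.
  bs=$((4 * 1024))     # bs=4K
  count=65536          # count=65536
  bytes=$((bs * count))
  echo "bytes: $bytes"                       # 268435456
  echo "MiB:   $((bytes / 1024 / 1024))"     # 256
  echo "MB:    $((bytes / 1000 / 1000))"     # 268, dd's decimal figure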
[2024-07-26 05:16:38.954755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.972 [2024-07-26 05:16:38.954795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:19.972 [2024-07-26 05:16:38.954888] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:19.972 [2024-07-26 05:16:38.954923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.972 [2024-07-26 05:16:38.954973] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:19.972 [2024-07-26 05:16:38.955019] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:19.972 [2024-07-26 05:16:38.955142] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:19.972 [2024-07-26 05:16:38.955269] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:19.972 [2024-07-26 05:16:38.955425] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:19.972 [2024-07-26 05:16:38.955481] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:19.972 [2024-07-26 05:16:38.955576] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:19.972 [2024-07-26 05:16:38.955631] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:19.972 [2024-07-26 05:16:38.955680] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:19.972 [2024-07-26 05:16:38.955767] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:19.972 [2024-07-26 05:16:38.955799] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:19.973 [2024-07-26 05:16:38.955873] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:19.973 [2024-07-26 05:16:38.955888] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:19.973 [2024-07-26 05:16:38.955900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.973 [2024-07-26 05:16:38.955923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:19.973 [2024-07-26 05:16:38.955935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:19:19.973 [2024-07-26 05:16:38.955944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.973 [2024-07-26 05:16:38.956014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.973 [2024-07-26 05:16:38.956025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:19.973 [2024-07-26 05:16:38.956035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:19.973 [2024-07-26 05:16:38.956045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.973 [2024-07-26 05:16:38.956114] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:19.973 [2024-07-26 05:16:38.956126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:19.973 [2024-07-26 05:16:38.956137] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:19.973 [2024-07-26 05:16:38.956150] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.973 [2024-07-26 05:16:38.956160] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:19.973 [2024-07-26 05:16:38.956169] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:19.973 [2024-07-26 05:16:38.956179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:19.973 [2024-07-26 05:16:38.956188] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:19.973 [2024-07-26 05:16:38.956198] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:19.973 [2024-07-26 05:16:38.956224] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:19.973 [2024-07-26 05:16:38.956234] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:19.973 [2024-07-26 05:16:38.956244] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:19.973 [2024-07-26 05:16:38.956253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:19.973 [2024-07-26 05:16:38.956262] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:19.973 [2024-07-26 05:16:38.956271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:19.973 [2024-07-26 05:16:38.956280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.973 [2024-07-26 05:16:38.956290] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:19.973 [2024-07-26 05:16:38.956300] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:19.973 [2024-07-26 05:16:38.956309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.973 [2024-07-26 05:16:38.956327] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:19.973 [2024-07-26 05:16:38.956337] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:19.973 [2024-07-26 05:16:38.956349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:19.973 [2024-07-26 05:16:38.956359] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:19.973 [2024-07-26 05:16:38.956368] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:19.973 [2024-07-26 05:16:38.956377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:19.973 [2024-07-26 05:16:38.956386] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:19.973 [2024-07-26 05:16:38.956396] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:19.973 [2024-07-26 05:16:38.956405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:19.973 [2024-07-26 05:16:38.956414] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:19.973 [2024-07-26 05:16:38.956423] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:19.973 [2024-07-26 05:16:38.956432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:19.973 [2024-07-26 05:16:38.956441] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:19.973 [2024-07-26 05:16:38.956451] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:19.973 [2024-07-26 05:16:38.956460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:19.973 [2024-07-26 05:16:38.956469] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:19.973 [2024-07-26 05:16:38.956478] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:19.973 [2024-07-26 05:16:38.956487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:19.973 [2024-07-26 05:16:38.956496] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:19.973 [2024-07-26 05:16:38.956506] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:19.973 [2024-07-26 05:16:38.956515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:19.973 [2024-07-26 05:16:38.956523] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:19.973 [2024-07-26 05:16:38.956533] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:19.973 [2024-07-26 05:16:38.956542] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:19.973 [2024-07-26 05:16:38.956552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.973 [2024-07-26 05:16:38.956561] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:19.973 [2024-07-26 05:16:38.956571] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:19.973 [2024-07-26 05:16:38.956580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:19.973 [2024-07-26 05:16:38.956589] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:19.973 [2024-07-26 05:16:38.956598] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:19.973 [2024-07-26 05:16:38.956607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:19.973 [2024-07-26 05:16:38.956617] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:19.973 [2024-07-26 05:16:38.956635] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:19.973 [2024-07-26 05:16:38.956647] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:19.973 [2024-07-26 05:16:38.956659] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:19.973 [2024-07-26 05:16:38.956669] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:19.973 [2024-07-26 05:16:38.956679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:19.973 [2024-07-26 05:16:38.956689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:19.973 [2024-07-26 05:16:38.956699] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:19.973 [2024-07-26 05:16:38.956710] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:19.973 [2024-07-26 05:16:38.956720] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:19.973 [2024-07-26 05:16:38.956730] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:19.973 
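The superblock metadata dump above expresses every region as a block offset and a block size, and those figures reconcile with the MiB values in the region dump if one FTL block is 4 KiB (implied by the arithmetic, not stated in the log): region type:0x2 with blk_sz:0x5a00 is 23040 blocks, or 90 MiB, which matches the Region l2p line, and independently the layout reported 23592960 L2P entries at an address size of 4 bytes, which is the same 90 MiB. A sketch of that cross-check:

  # Cross-check the l2p region size two ways, using values from the dumps above.
  FTL_BLOCK=4096                      # assumed FTL block size, implied by the numbers
  blk_sz=$((0x5a00))                  # region type:0x2 blk_sz from the superblock dump
  echo "from blk_sz:  $((blk_sz * FTL_BLOCK / 1024 / 1024)) MiB"    # 90
  entries=23592960                    # "L2P entries" from the layout setup
  addr_size=4                         # "L2P address size"
  echo "from entries: $((entries * addr_size / 1024 / 1024)) MiB"   # 90

At 4 KiB per mapped block, those entries also bound the mappable user space at 92160 MiB, comfortably inside the 102400 MiB data_btm region; the gap is consistent with bands held back for FTL-internal use, though the log does not break that down.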
[2024-07-26 05:16:38.956741] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:19.973 [2024-07-26 05:16:38.956751] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:19.973 [2024-07-26 05:16:38.956761] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:19.973 [2024-07-26 05:16:38.956771] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:19.973 [2024-07-26 05:16:38.956781] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:19.973 [2024-07-26 05:16:38.956792] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:19.973 [2024-07-26 05:16:38.956803] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:19.973 [2024-07-26 05:16:38.956813] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:19.973 [2024-07-26 05:16:38.956823] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:19.973 [2024-07-26 05:16:38.956835] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:19.973 [2024-07-26 05:16:38.956845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.973 [2024-07-26 05:16:38.956858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:19.973 [2024-07-26 05:16:38.956868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:19:19.973 [2024-07-26 05:16:38.956878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.973 [2024-07-26 05:16:38.981537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.973 [2024-07-26 05:16:38.981683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:19.973 [2024-07-26 05:16:38.981873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.613 ms 00:19:19.973 [2024-07-26 05:16:38.981910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.973 [2024-07-26 05:16:38.982047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.973 [2024-07-26 05:16:38.982153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:19.973 [2024-07-26 05:16:38.982217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:19.973 [2024-07-26 05:16:38.982250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.973 [2024-07-26 05:16:39.044242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.973 [2024-07-26 05:16:39.044386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:19.973 [2024-07-26 05:16:39.044516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.948 ms 00:19:19.973 [2024-07-26 05:16:39.044554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.973 
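Between them, the layout lines above also size the write buffer: the data_nvc region is 4096.00 MiB and the FTL reported NV cache chunk count 4, i.e. 1024 MiB per chunk, and the restore step below duly finds full chunks = 0, empty chunks = 4 on this clean device. The gap between the 5171.00 MiB cache device and its 4096 MiB data region is the metadata listed above, the l2p, band_md, nvc_md, p2l and trim regions and their mirrors (which is why data_nvc starts at offset 107.88 MiB), plus the unallocated 0xfffffffe tail. A sketch of the split:

  # NV cache sizing from the layout dump above.
  nvc_dev_mib=5171      # "NV cache device capacity"
  data_nvc_mib=4096     # "Region data_nvc ... blocks: 4096.00 MiB"
  chunks=4              # "NV cache chunk count"
  echo "chunk size:      $((data_nvc_mib / chunks)) MiB"        # 1024
  echo "metadata + free: $((nvc_dev_mib - data_nvc_mib)) MiB"   # 1075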
[2024-07-26 05:16:39.044639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.973 [2024-07-26 05:16:39.044674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:19.974 [2024-07-26 05:16:39.044705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:19.974 [2024-07-26 05:16:39.044789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.974 [2024-07-26 05:16:39.045289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.974 [2024-07-26 05:16:39.045333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:19.974 [2024-07-26 05:16:39.045411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:19:19.974 [2024-07-26 05:16:39.045445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.974 [2024-07-26 05:16:39.045668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.974 [2024-07-26 05:16:39.045705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:19.974 [2024-07-26 05:16:39.045735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:19.974 [2024-07-26 05:16:39.045746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.974 [2024-07-26 05:16:39.068465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.974 [2024-07-26 05:16:39.068608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:19.974 [2024-07-26 05:16:39.068777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.694 ms 00:19:19.974 [2024-07-26 05:16:39.068816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.088486] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:20.233 [2024-07-26 05:16:39.088522] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:20.233 [2024-07-26 05:16:39.088540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.088550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:20.233 [2024-07-26 05:16:39.088561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.590 ms 00:19:20.233 [2024-07-26 05:16:39.088570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.118298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.118334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:20.233 [2024-07-26 05:16:39.118347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.656 ms 00:19:20.233 [2024-07-26 05:16:39.118373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.136445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.136478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:20.233 [2024-07-26 05:16:39.136490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.989 ms 00:19:20.233 [2024-07-26 05:16:39.136499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.154624] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.154656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:20.233 [2024-07-26 05:16:39.154679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.056 ms 00:19:20.233 [2024-07-26 05:16:39.154688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.155115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.155128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:20.233 [2024-07-26 05:16:39.155139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:19:20.233 [2024-07-26 05:16:39.155148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.246280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.246339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:20.233 [2024-07-26 05:16:39.246356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.108 ms 00:19:20.233 [2024-07-26 05:16:39.246366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.258867] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:20.233 [2024-07-26 05:16:39.275023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.275083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:20.233 [2024-07-26 05:16:39.275098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.556 ms 00:19:20.233 [2024-07-26 05:16:39.275109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.275242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.275273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:20.233 [2024-07-26 05:16:39.275284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:20.233 [2024-07-26 05:16:39.275294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.275350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.275362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:20.233 [2024-07-26 05:16:39.275372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:20.233 [2024-07-26 05:16:39.275385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.277561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.277590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:20.233 [2024-07-26 05:16:39.277601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:19:20.233 [2024-07-26 05:16:39.277611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.277645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.277655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:20.233 [2024-07-26 05:16:39.277666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.005 ms 00:19:20.233 [2024-07-26 05:16:39.277675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.277715] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:20.233 [2024-07-26 05:16:39.277727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.277736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:20.233 [2024-07-26 05:16:39.277746] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:20.233 [2024-07-26 05:16:39.277755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.314685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.314721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:20.233 [2024-07-26 05:16:39.314734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.905 ms 00:19:20.233 [2024-07-26 05:16:39.314754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.314855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.233 [2024-07-26 05:16:39.314867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:20.233 [2024-07-26 05:16:39.314878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:20.233 [2024-07-26 05:16:39.314887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.233 [2024-07-26 05:16:39.315886] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:20.233 [2024-07-26 05:16:39.321118] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 399.791 ms, result 0 00:19:20.233 [2024-07-26 05:16:39.322033] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.233 [2024-07-26 05:16:39.340783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:29.572  Copying: 27/256 [MB] (27 MBps) Copying: 54/256 [MB] (27 MBps) Copying: 82/256 [MB] (27 MBps) Copying: 109/256 [MB] (27 MBps) Copying: 136/256 [MB] (27 MBps) Copying: 162/256 [MB] (26 MBps) Copying: 190/256 [MB] (27 MBps) Copying: 218/256 [MB] (28 MBps) Copying: 247/256 [MB] (29 MBps) Copying: 256/256 [MB] (average 27 MBps)[2024-07-26 05:16:48.632437] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:29.572 [2024-07-26 05:16:48.647115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.572 [2024-07-26 05:16:48.647153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:29.572 [2024-07-26 05:16:48.647168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:29.572 [2024-07-26 05:16:48.647193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.572 [2024-07-26 05:16:48.647248] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:29.572 [2024-07-26 05:16:48.650880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.572 [2024-07-26 05:16:48.650906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 
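The run of Copying: lines above is spdk_dd's carriage-return progress meter flattened into the log, one update roughly per second at 26-29 MBps, ending at an average of 27 MBps for the full 256 MB. That average squares with the surrounding timestamps: the copy sits between roughly 05:16:39.34 (FTL startup finished, IO channel up) and 05:16:48.63 (IO channel destroyed), about 9.3 s of wall clock. A rough re-derivation from just those two figures:

  # Re-derive the average spdk_dd throughput from the log above.
  mb=256       # total copied, per the progress trace
  secs=9.3     # ~05:16:39.34 -> ~05:16:48.63 from the adjacent timestamps
  awk -v mb="$mb" -v s="$secs" 'BEGIN { printf "average: %.1f MBps\n", mb / s }'   # ~27.5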
00:19:29.572 [2024-07-26 05:16:48.650918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.615 ms 00:19:29.572 [2024-07-26 05:16:48.650926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.572 [2024-07-26 05:16:48.652791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.572 [2024-07-26 05:16:48.652828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:29.572 [2024-07-26 05:16:48.652841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.841 ms 00:19:29.572 [2024-07-26 05:16:48.652851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.572 [2024-07-26 05:16:48.659711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.572 [2024-07-26 05:16:48.659745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:29.572 [2024-07-26 05:16:48.659763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.841 ms 00:19:29.572 [2024-07-26 05:16:48.659773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.572 [2024-07-26 05:16:48.665388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.572 [2024-07-26 05:16:48.665419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:29.572 [2024-07-26 05:16:48.665431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.552 ms 00:19:29.572 [2024-07-26 05:16:48.665440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.832 [2024-07-26 05:16:48.703249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.832 [2024-07-26 05:16:48.703283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:29.832 [2024-07-26 05:16:48.703296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.764 ms 00:19:29.832 [2024-07-26 05:16:48.703320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.832 [2024-07-26 05:16:48.724085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.832 [2024-07-26 05:16:48.724118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:29.832 [2024-07-26 05:16:48.724131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.698 ms 00:19:29.832 [2024-07-26 05:16:48.724145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.832 [2024-07-26 05:16:48.724315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.832 [2024-07-26 05:16:48.724329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:29.832 [2024-07-26 05:16:48.724340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:29.832 [2024-07-26 05:16:48.724349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.832 [2024-07-26 05:16:48.763041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.832 [2024-07-26 05:16:48.763076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:29.832 [2024-07-26 05:16:48.763089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.657 ms 00:19:29.832 [2024-07-26 05:16:48.763111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.832 [2024-07-26 05:16:48.800599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.832 [2024-07-26 05:16:48.800631] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:19:29.832 [2024-07-26 05:16:48.800644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.420 ms
00:19:29.832 [2024-07-26 05:16:48.800653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:29.832 [2024-07-26 05:16:48.839503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:29.832 [2024-07-26 05:16:48.839643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:19:29.832 [2024-07-26 05:16:48.839785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.596 ms
00:19:29.832 [2024-07-26 05:16:48.839824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:29.832 [2024-07-26 05:16:48.875892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:29.832 [2024-07-26 05:16:48.876036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:19:29.832 [2024-07-26 05:16:48.876167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.955 ms
00:19:29.832 [2024-07-26 05:16:48.876203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:29.832 [2024-07-26 05:16:48.876303] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:29.832 [2024-07-26 05:16:48.876353 .. 05:16:48.879159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100 (100 bands): 0 / 261120 wr_cnt: 0 state: free
00:19:29.834 [2024-07-26 05:16:48.879176] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:29.834 [2024-07-26 05:16:48.879187] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bdf932e-f5a4-44d7-9d31-9e46b2e97560
00:19:29.834 [2024-07-26 05:16:48.879219] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:29.834 [2024-07-26 05:16:48.879230] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:29.834 [2024-07-26 05:16:48.879239] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:29.834 [2024-07-26 05:16:48.879250] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:29.834 [2024-07-26 05:16:48.879258] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:29.834 [2024-07-26 05:16:48.879268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:29.834 [2024-07-26 05:16:48.879278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:29.834 [2024-07-26 05:16:48.879287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:29.834 [2024-07-26 05:16:48.879296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:29.834 [2024-07-26 05:16:48.879306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:29.834 [2024-07-26 05:16:48.879316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:29.834 [2024-07-26 05:16:48.879331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.005 ms
00:19:29.834 [2024-07-26 05:16:48.879341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:29.834 [2024-07-26 05:16:48.898792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:29.834 [2024-07-26 05:16:48.898822] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:29.834 [2024-07-26 05:16:48.898834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.426 ms
00:19:29.834 [2024-07-26 05:16:48.898844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0]
status: 0 00:19:29.834 [2024-07-26 05:16:48.899096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.834 [2024-07-26 05:16:48.899107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:29.834 [2024-07-26 05:16:48.899117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:19:29.834 [2024-07-26 05:16:48.899127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.093 [2024-07-26 05:16:48.955800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.093 [2024-07-26 05:16:48.955834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.093 [2024-07-26 05:16:48.955846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.093 [2024-07-26 05:16:48.955872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.093 [2024-07-26 05:16:48.955963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.093 [2024-07-26 05:16:48.955975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.093 [2024-07-26 05:16:48.955986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.093 [2024-07-26 05:16:48.955995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.093 [2024-07-26 05:16:48.956044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.093 [2024-07-26 05:16:48.956056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.093 [2024-07-26 05:16:48.956066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.093 [2024-07-26 05:16:48.956075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.093 [2024-07-26 05:16:48.956095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.093 [2024-07-26 05:16:48.956105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.093 [2024-07-26 05:16:48.956123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.093 [2024-07-26 05:16:48.956133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.093 [2024-07-26 05:16:49.067831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.093 [2024-07-26 05:16:49.067887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.093 [2024-07-26 05:16:49.067902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.093 [2024-07-26 05:16:49.067911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.093 [2024-07-26 05:16:49.111613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.093 [2024-07-26 05:16:49.111660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.093 [2024-07-26 05:16:49.111673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.093 [2024-07-26 05:16:49.111683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.093 [2024-07-26 05:16:49.111759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.093 [2024-07-26 05:16:49.111770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.093 [2024-07-26 05:16:49.111780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.093 
[2024-07-26 05:16:49.111789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.093 [2024-07-26 05:16:49.111817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.093 [2024-07-26 05:16:49.111827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.093 [2024-07-26 05:16:49.111837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.093 [2024-07-26 05:16:49.111850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.093 [2024-07-26 05:16:49.111953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.093 [2024-07-26 05:16:49.111965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.093 [2024-07-26 05:16:49.111975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.093 [2024-07-26 05:16:49.111984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.093 [2024-07-26 05:16:49.112017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.093 [2024-07-26 05:16:49.112028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:30.093 [2024-07-26 05:16:49.112038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.093 [2024-07-26 05:16:49.112051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.093 [2024-07-26 05:16:49.112088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.093 [2024-07-26 05:16:49.112098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.093 [2024-07-26 05:16:49.112107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.093 [2024-07-26 05:16:49.112117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.093 [2024-07-26 05:16:49.112161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.093 [2024-07-26 05:16:49.112172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.093 [2024-07-26 05:16:49.112182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.093 [2024-07-26 05:16:49.112194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.094 [2024-07-26 05:16:49.112399] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 465.266 ms, result 0 00:19:31.472 00:19:31.472 00:19:31.472 05:16:50 -- ftl/trim.sh@72 -- # svcpid=73611 00:19:31.472 05:16:50 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:31.472 05:16:50 -- ftl/trim.sh@73 -- # waitforlisten 73611 00:19:31.472 05:16:50 -- common/autotest_common.sh@819 -- # '[' -z 73611 ']' 00:19:31.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.472 05:16:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.472 05:16:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:31.472 05:16:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.472 05:16:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:31.472 05:16:50 -- common/autotest_common.sh@10 -- # set +x 00:19:31.472 [2024-07-26 05:16:50.529450] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:31.472 [2024-07-26 05:16:50.529832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73611 ] 00:19:31.731 [2024-07-26 05:16:50.693231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.990 [2024-07-26 05:16:50.916151] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:31.990 [2024-07-26 05:16:50.916540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.369 05:16:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:33.369 05:16:52 -- common/autotest_common.sh@852 -- # return 0 00:19:33.369 05:16:52 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:33.369 [2024-07-26 05:16:52.328093] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:33.369 [2024-07-26 05:16:52.328152] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:33.628 [2024-07-26 05:16:52.502943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.628 [2024-07-26 05:16:52.502989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:33.628 [2024-07-26 05:16:52.503009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:33.628 [2024-07-26 05:16:52.503020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.628 [2024-07-26 05:16:52.506767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.628 [2024-07-26 05:16:52.506802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:33.628 [2024-07-26 05:16:52.506818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:19:33.628 [2024-07-26 05:16:52.506828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.628 [2024-07-26 05:16:52.506948] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:33.628 [2024-07-26 05:16:52.508109] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:33.628 [2024-07-26 05:16:52.508146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.628 [2024-07-26 05:16:52.508158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:33.628 [2024-07-26 05:16:52.508173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:19:33.628 [2024-07-26 05:16:52.508184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.628 [2024-07-26 05:16:52.509688] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:33.628 [2024-07-26 05:16:52.529826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.628 [2024-07-26 05:16:52.529878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:33.628 [2024-07-26 05:16:52.529893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.145 ms 00:19:33.628 [2024-07-26 05:16:52.529907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.628 [2024-07-26 05:16:52.530015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.628 [2024-07-26 05:16:52.530034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:19:33.628 [2024-07-26 05:16:52.530045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:33.628 [2024-07-26 05:16:52.530060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.628 [2024-07-26 05:16:52.536792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.628 [2024-07-26 05:16:52.536827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:33.628 [2024-07-26 05:16:52.536838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.679 ms 00:19:33.628 [2024-07-26 05:16:52.536854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.628 [2024-07-26 05:16:52.536960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.628 [2024-07-26 05:16:52.536979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:33.628 [2024-07-26 05:16:52.536989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:33.628 [2024-07-26 05:16:52.537003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.628 [2024-07-26 05:16:52.537030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.628 [2024-07-26 05:16:52.537051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:33.628 [2024-07-26 05:16:52.537061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:33.628 [2024-07-26 05:16:52.537075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.628 [2024-07-26 05:16:52.537104] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:33.628 [2024-07-26 05:16:52.542816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.628 [2024-07-26 05:16:52.542845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:33.628 [2024-07-26 05:16:52.542861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.717 ms 00:19:33.628 [2024-07-26 05:16:52.542870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.628 [2024-07-26 05:16:52.542943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.628 [2024-07-26 05:16:52.542954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:33.628 [2024-07-26 05:16:52.542969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:33.628 [2024-07-26 05:16:52.542978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.628 [2024-07-26 05:16:52.543004] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:33.628 [2024-07-26 05:16:52.543034] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:33.628 [2024-07-26 05:16:52.543071] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:33.628 [2024-07-26 05:16:52.543088] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:33.628 [2024-07-26 05:16:52.543156] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:33.628 [2024-07-26 05:16:52.543169] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:19:33.628 [2024-07-26 05:16:52.543186] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:33.628 [2024-07-26 05:16:52.543199] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:33.628 [2024-07-26 05:16:52.543234] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:33.628 [2024-07-26 05:16:52.543261] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:33.628 [2024-07-26 05:16:52.543276] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:33.628 [2024-07-26 05:16:52.543286] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:33.628 [2024-07-26 05:16:52.543304] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:33.628 [2024-07-26 05:16:52.543315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.628 [2024-07-26 05:16:52.543330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:33.628 [2024-07-26 05:16:52.543340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:19:33.628 [2024-07-26 05:16:52.543354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.628 [2024-07-26 05:16:52.543416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.628 [2024-07-26 05:16:52.543432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:33.628 [2024-07-26 05:16:52.543447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:33.628 [2024-07-26 05:16:52.543462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.628 [2024-07-26 05:16:52.543532] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:33.628 [2024-07-26 05:16:52.543548] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:33.628 [2024-07-26 05:16:52.543559] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.628 [2024-07-26 05:16:52.543574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.628 [2024-07-26 05:16:52.543584] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:33.628 [2024-07-26 05:16:52.543600] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:33.629 [2024-07-26 05:16:52.543610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:33.629 [2024-07-26 05:16:52.543629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:33.629 [2024-07-26 05:16:52.543638] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:33.629 [2024-07-26 05:16:52.543652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.629 [2024-07-26 05:16:52.543661] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:33.629 [2024-07-26 05:16:52.543675] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:33.629 [2024-07-26 05:16:52.543685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.629 [2024-07-26 05:16:52.543698] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:33.629 [2024-07-26 05:16:52.543707] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:33.629 [2024-07-26 05:16:52.543720] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.629 [2024-07-26 05:16:52.543729] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:33.629 [2024-07-26 05:16:52.543743] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:33.629 [2024-07-26 05:16:52.543752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.629 [2024-07-26 05:16:52.543766] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:33.629 [2024-07-26 05:16:52.543775] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:33.629 [2024-07-26 05:16:52.543789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:33.629 [2024-07-26 05:16:52.543799] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:33.629 [2024-07-26 05:16:52.543816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:33.629 [2024-07-26 05:16:52.543825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:33.629 [2024-07-26 05:16:52.543839] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:33.629 [2024-07-26 05:16:52.543848] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:33.629 [2024-07-26 05:16:52.543863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:33.629 [2024-07-26 05:16:52.543872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:33.629 [2024-07-26 05:16:52.543886] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:33.629 [2024-07-26 05:16:52.543908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:33.629 [2024-07-26 05:16:52.543922] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:33.629 [2024-07-26 05:16:52.543932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:33.629 [2024-07-26 05:16:52.543946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:33.629 [2024-07-26 05:16:52.543954] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:33.629 [2024-07-26 05:16:52.543968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:33.629 [2024-07-26 05:16:52.543977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.629 [2024-07-26 05:16:52.543991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:33.629 [2024-07-26 05:16:52.544000] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:33.629 [2024-07-26 05:16:52.544018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.629 [2024-07-26 05:16:52.544026] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:33.629 [2024-07-26 05:16:52.544041] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:33.629 [2024-07-26 05:16:52.544050] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.629 [2024-07-26 05:16:52.544070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.629 [2024-07-26 05:16:52.544081] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:33.629 [2024-07-26 05:16:52.544095] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:33.629 [2024-07-26 05:16:52.544104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:19:33.629 [2024-07-26 05:16:52.544117] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:33.629 [2024-07-26 05:16:52.544126] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:33.629 [2024-07-26 05:16:52.544140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:33.629 [2024-07-26 05:16:52.544151] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:33.629 [2024-07-26 05:16:52.544167] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.629 [2024-07-26 05:16:52.544179] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:33.629 [2024-07-26 05:16:52.544195] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:33.629 [2024-07-26 05:16:52.544425] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:33.629 [2024-07-26 05:16:52.544509] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:33.629 [2024-07-26 05:16:52.544562] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:33.629 [2024-07-26 05:16:52.544614] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:33.629 [2024-07-26 05:16:52.544662] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:33.629 [2024-07-26 05:16:52.544714] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:33.629 [2024-07-26 05:16:52.544827] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:33.629 [2024-07-26 05:16:52.544885] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:33.629 [2024-07-26 05:16:52.544934] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:33.629 [2024-07-26 05:16:52.544987] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:33.629 [2024-07-26 05:16:52.545036] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:33.629 [2024-07-26 05:16:52.545202] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:33.629 [2024-07-26 05:16:52.545262] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.629 [2024-07-26 05:16:52.545328] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:33.629 [2024-07-26 05:16:52.545377] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:33.629 [2024-07-26 05:16:52.545484] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:33.629 [2024-07-26 05:16:52.545586] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:33.629 [2024-07-26 05:16:52.545688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.629 [2024-07-26 05:16:52.545726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:33.629 [2024-07-26 05:16:52.545763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.177 ms 00:19:33.629 [2024-07-26 05:16:52.545794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.629 [2024-07-26 05:16:52.571475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.629 [2024-07-26 05:16:52.571635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:33.629 [2024-07-26 05:16:52.571758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.597 ms 00:19:33.629 [2024-07-26 05:16:52.571804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.629 [2024-07-26 05:16:52.571945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.629 [2024-07-26 05:16:52.571991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:33.629 [2024-07-26 05:16:52.572148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:33.629 [2024-07-26 05:16:52.572161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.629 [2024-07-26 05:16:52.627501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.629 [2024-07-26 05:16:52.627658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:33.629 [2024-07-26 05:16:52.627805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.310 ms 00:19:33.629 [2024-07-26 05:16:52.627846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.629 [2024-07-26 05:16:52.627947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.629 [2024-07-26 05:16:52.627984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:33.629 [2024-07-26 05:16:52.628021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:33.629 [2024-07-26 05:16:52.628174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.629 [2024-07-26 05:16:52.628702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.629 [2024-07-26 05:16:52.628752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:33.629 [2024-07-26 05:16:52.628853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:19:33.629 [2024-07-26 05:16:52.628946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.629 [2024-07-26 05:16:52.629099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.629 [2024-07-26 05:16:52.629180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:33.629 [2024-07-26 05:16:52.629294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:33.629 [2024-07-26 05:16:52.629334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:33.629 [2024-07-26 05:16:52.654987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.629 [2024-07-26 05:16:52.655018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:33.629 [2024-07-26 05:16:52.655035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.556 ms 00:19:33.629 [2024-07-26 05:16:52.655062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.629 [2024-07-26 05:16:52.675229] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:33.629 [2024-07-26 05:16:52.675263] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:33.629 [2024-07-26 05:16:52.675281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.629 [2024-07-26 05:16:52.675308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:33.629 [2024-07-26 05:16:52.675324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.095 ms 00:19:33.629 [2024-07-26 05:16:52.675334] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.629 [2024-07-26 05:16:52.704091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.629 [2024-07-26 05:16:52.704125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:33.629 [2024-07-26 05:16:52.704148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.665 ms 00:19:33.629 [2024-07-26 05:16:52.704158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.629 [2024-07-26 05:16:52.722424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.629 [2024-07-26 05:16:52.722468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:33.629 [2024-07-26 05:16:52.722485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.153 ms 00:19:33.629 [2024-07-26 05:16:52.722511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.888 [2024-07-26 05:16:52.741768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.888 [2024-07-26 05:16:52.741802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:33.888 [2024-07-26 05:16:52.741826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.167 ms 00:19:33.888 [2024-07-26 05:16:52.741836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.888 [2024-07-26 05:16:52.742411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.888 [2024-07-26 05:16:52.742432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:33.888 [2024-07-26 05:16:52.742448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:19:33.888 [2024-07-26 05:16:52.742458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.888 [2024-07-26 05:16:52.839102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.888 [2024-07-26 05:16:52.839162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:33.888 [2024-07-26 05:16:52.839183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.601 ms 00:19:33.888 [2024-07-26 05:16:52.839199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.888 [2024-07-26 
05:16:52.851433] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:33.888 [2024-07-26 05:16:52.867750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.888 [2024-07-26 05:16:52.867803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:33.888 [2024-07-26 05:16:52.867819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.428 ms 00:19:33.888 [2024-07-26 05:16:52.867834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.888 [2024-07-26 05:16:52.867937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.888 [2024-07-26 05:16:52.867960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:33.888 [2024-07-26 05:16:52.867971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:33.888 [2024-07-26 05:16:52.867987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.888 [2024-07-26 05:16:52.868043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.888 [2024-07-26 05:16:52.868060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:33.888 [2024-07-26 05:16:52.868071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:33.888 [2024-07-26 05:16:52.868086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.889 [2024-07-26 05:16:52.871152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.889 [2024-07-26 05:16:52.871190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:33.889 [2024-07-26 05:16:52.871202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.042 ms 00:19:33.889 [2024-07-26 05:16:52.871224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.889 [2024-07-26 05:16:52.871258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.889 [2024-07-26 05:16:52.871279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:33.889 [2024-07-26 05:16:52.871290] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:33.889 [2024-07-26 05:16:52.871310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.889 [2024-07-26 05:16:52.871357] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:33.889 [2024-07-26 05:16:52.871378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.889 [2024-07-26 05:16:52.871389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:33.889 [2024-07-26 05:16:52.871404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:33.889 [2024-07-26 05:16:52.871414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.889 [2024-07-26 05:16:52.909779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.889 [2024-07-26 05:16:52.909919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:33.889 [2024-07-26 05:16:52.910057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.329 ms 00:19:33.889 [2024-07-26 05:16:52.910096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.889 [2024-07-26 05:16:52.910241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.889 [2024-07-26 05:16:52.910286] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:33.889 [2024-07-26 05:16:52.910370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:33.889 [2024-07-26 05:16:52.910407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.889 [2024-07-26 05:16:52.911525] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:33.889 [2024-07-26 05:16:52.917017] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 408.178 ms, result 0 00:19:33.889 [2024-07-26 05:16:52.918199] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:33.889 Some configs were skipped because the RPC state that can call them passed over. 00:19:33.889 05:16:52 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:34.147 [2024-07-26 05:16:53.146420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.147 [2024-07-26 05:16:53.146611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:34.147 [2024-07-26 05:16:53.146714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.487 ms 00:19:34.147 [2024-07-26 05:16:53.146759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.147 [2024-07-26 05:16:53.146832] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 37.897 ms, result 0 00:19:34.147 true 00:19:34.147 05:16:53 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:34.416 [2024-07-26 05:16:53.424190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.416 [2024-07-26 05:16:53.424426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:34.416 [2024-07-26 05:16:53.424562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.363 ms 00:19:34.416 [2024-07-26 05:16:53.424601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.416 [2024-07-26 05:16:53.424683] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 37.850 ms, result 0 00:19:34.416 true 00:19:34.416 05:16:53 -- ftl/trim.sh@81 -- # killprocess 73611 00:19:34.416 05:16:53 -- common/autotest_common.sh@926 -- # '[' -z 73611 ']' 00:19:34.416 05:16:53 -- common/autotest_common.sh@930 -- # kill -0 73611 00:19:34.416 05:16:53 -- common/autotest_common.sh@931 -- # uname 00:19:34.416 05:16:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:34.416 05:16:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73611 00:19:34.416 killing process with pid 73611 00:19:34.416 05:16:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:34.416 05:16:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:34.416 05:16:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73611' 00:19:34.416 05:16:53 -- common/autotest_common.sh@945 -- # kill 73611 00:19:34.416 05:16:53 -- common/autotest_common.sh@950 -- # wait 73611 00:19:35.805 [2024-07-26 05:16:54.553756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.805 [2024-07-26 05:16:54.553820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:19:35.805 [2024-07-26 05:16:54.553836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:35.805 [2024-07-26 05:16:54.553848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.805 [2024-07-26 05:16:54.553872] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:35.805 [2024-07-26 05:16:54.557649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.805 [2024-07-26 05:16:54.557682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:35.805 [2024-07-26 05:16:54.557699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.757 ms 00:19:35.805 [2024-07-26 05:16:54.557709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.805 [2024-07-26 05:16:54.557964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.805 [2024-07-26 05:16:54.557977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:35.805 [2024-07-26 05:16:54.557989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:19:35.805 [2024-07-26 05:16:54.557999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.805 [2024-07-26 05:16:54.561343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.805 [2024-07-26 05:16:54.561379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:35.805 [2024-07-26 05:16:54.561393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.321 ms 00:19:35.805 [2024-07-26 05:16:54.561405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.805 [2024-07-26 05:16:54.566994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.805 [2024-07-26 05:16:54.567025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:35.805 [2024-07-26 05:16:54.567038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.547 ms 00:19:35.805 [2024-07-26 05:16:54.567047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.805 [2024-07-26 05:16:54.582185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.805 [2024-07-26 05:16:54.582224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:35.805 [2024-07-26 05:16:54.582268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.061 ms 00:19:35.805 [2024-07-26 05:16:54.582278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.805 [2024-07-26 05:16:54.592540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.805 [2024-07-26 05:16:54.592574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:35.805 [2024-07-26 05:16:54.592591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.202 ms 00:19:35.805 [2024-07-26 05:16:54.592600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.805 [2024-07-26 05:16:54.592756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.805 [2024-07-26 05:16:54.592769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:35.805 [2024-07-26 05:16:54.592782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:35.805 [2024-07-26 05:16:54.592791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:35.805 [2024-07-26 05:16:54.608319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.805 [2024-07-26 05:16:54.608350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:35.805 [2024-07-26 05:16:54.608364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.504 ms 00:19:35.805 [2024-07-26 05:16:54.608373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.805 [2024-07-26 05:16:54.623862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.805 [2024-07-26 05:16:54.623892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:35.805 [2024-07-26 05:16:54.623911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.432 ms 00:19:35.805 [2024-07-26 05:16:54.623920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.805 [2024-07-26 05:16:54.638944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.805 [2024-07-26 05:16:54.638973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:35.805 [2024-07-26 05:16:54.639003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.966 ms 00:19:35.805 [2024-07-26 05:16:54.639012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.805 [2024-07-26 05:16:54.654275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.806 [2024-07-26 05:16:54.654306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:35.806 [2024-07-26 05:16:54.654321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.199 ms 00:19:35.806 [2024-07-26 05:16:54.654330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.806 [2024-07-26 05:16:54.654368] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:35.806 [2024-07-26 05:16:54.654384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654520] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 
05:16:54.654838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.654994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:19:35.806 [2024-07-26 05:16:54.655135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:35.806 [2024-07-26 05:16:54.655298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:35.807 [2024-07-26 05:16:54.655648] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:35.807 [2024-07-26 05:16:54.655682] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bdf932e-f5a4-44d7-9d31-9e46b2e97560 00:19:35.807 [2024-07-26 05:16:54.655699] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:35.807 [2024-07-26 05:16:54.655714] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:35.807 [2024-07-26 05:16:54.655723] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:35.807 [2024-07-26 05:16:54.655738] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:35.807 [2024-07-26 05:16:54.655749] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:35.807 [2024-07-26 05:16:54.655763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:35.807 [2024-07-26 05:16:54.655773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:35.807 [2024-07-26 05:16:54.655787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:35.807 [2024-07-26 05:16:54.655796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:35.807 [2024-07-26 05:16:54.655810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.807 [2024-07-26 05:16:54.655821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:35.807 [2024-07-26 05:16:54.655837] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:19:35.807 [2024-07-26 05:16:54.655846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.675133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.807 [2024-07-26 05:16:54.675165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:35.807 [2024-07-26 05:16:54.675186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.247 ms 00:19:35.807 [2024-07-26 05:16:54.675211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.675494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.807 [2024-07-26 05:16:54.675507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:35.807 [2024-07-26 05:16:54.675523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:19:35.807 [2024-07-26 05:16:54.675533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.742766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.807 [2024-07-26 05:16:54.742801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.807 [2024-07-26 05:16:54.742832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.807 [2024-07-26 05:16:54.742842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.742927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.807 [2024-07-26 05:16:54.742939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.807 [2024-07-26 05:16:54.742951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.807 [2024-07-26 05:16:54.742961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.743016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.807 [2024-07-26 05:16:54.743028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.807 [2024-07-26 05:16:54.743043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.807 [2024-07-26 05:16:54.743053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.743074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.807 [2024-07-26 05:16:54.743084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.807 [2024-07-26 05:16:54.743096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.807 [2024-07-26 05:16:54.743106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.864869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.807 [2024-07-26 05:16:54.864942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.807 [2024-07-26 05:16:54.864959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.807 [2024-07-26 05:16:54.864970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.909126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.807 [2024-07-26 05:16:54.909169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:19:35.807 [2024-07-26 05:16:54.909187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.807 [2024-07-26 05:16:54.909198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.909302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.807 [2024-07-26 05:16:54.909314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.807 [2024-07-26 05:16:54.909331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.807 [2024-07-26 05:16:54.909341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.909375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.807 [2024-07-26 05:16:54.909386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.807 [2024-07-26 05:16:54.909398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.807 [2024-07-26 05:16:54.909408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.909519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.807 [2024-07-26 05:16:54.909535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.807 [2024-07-26 05:16:54.909548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.807 [2024-07-26 05:16:54.909558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.909598] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.807 [2024-07-26 05:16:54.909610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:35.807 [2024-07-26 05:16:54.909623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.807 [2024-07-26 05:16:54.909634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.909675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.807 [2024-07-26 05:16:54.909688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.807 [2024-07-26 05:16:54.909703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.807 [2024-07-26 05:16:54.909713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.909760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.807 [2024-07-26 05:16:54.909771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.807 [2024-07-26 05:16:54.909784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.807 [2024-07-26 05:16:54.909794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.807 [2024-07-26 05:16:54.909933] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 356.152 ms, result 0 00:19:37.193 05:16:56 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:37.193 05:16:56 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:37.193 [2024-07-26 05:16:56.239502] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:19:37.193 [2024-07-26 05:16:56.239628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73688 ] 00:19:37.451 [2024-07-26 05:16:56.398575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.710 [2024-07-26 05:16:56.629436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.969 [2024-07-26 05:16:57.032543] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:37.969 [2024-07-26 05:16:57.032604] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:38.229 [2024-07-26 05:16:57.187460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.229 [2024-07-26 05:16:57.187509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:38.229 [2024-07-26 05:16:57.187541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:38.229 [2024-07-26 05:16:57.187555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.229 [2024-07-26 05:16:57.190795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.229 [2024-07-26 05:16:57.190831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.229 [2024-07-26 05:16:57.190859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:19:38.229 [2024-07-26 05:16:57.190872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.229 [2024-07-26 05:16:57.190977] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:38.229 [2024-07-26 05:16:57.192077] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:38.229 [2024-07-26 05:16:57.192111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.229 [2024-07-26 05:16:57.192126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.229 [2024-07-26 05:16:57.192137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:19:38.229 [2024-07-26 05:16:57.192147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.229 [2024-07-26 05:16:57.193799] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:38.229 [2024-07-26 05:16:57.213738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.229 [2024-07-26 05:16:57.213777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:38.229 [2024-07-26 05:16:57.213790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.940 ms 00:19:38.229 [2024-07-26 05:16:57.213800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.229 [2024-07-26 05:16:57.213900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.229 [2024-07-26 05:16:57.213914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:38.229 [2024-07-26 05:16:57.213929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:38.229 [2024-07-26 05:16:57.213938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.229 [2024-07-26 05:16:57.220693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:38.229 [2024-07-26 05:16:57.220722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.229 [2024-07-26 05:16:57.220743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.712 ms 00:19:38.229 [2024-07-26 05:16:57.220770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.229 [2024-07-26 05:16:57.220879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.229 [2024-07-26 05:16:57.220896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.229 [2024-07-26 05:16:57.220908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:38.229 [2024-07-26 05:16:57.220918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.229 [2024-07-26 05:16:57.220947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.229 [2024-07-26 05:16:57.220958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:38.229 [2024-07-26 05:16:57.220969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:38.229 [2024-07-26 05:16:57.220979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.229 [2024-07-26 05:16:57.221006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:38.229 [2024-07-26 05:16:57.226792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.229 [2024-07-26 05:16:57.226824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.229 [2024-07-26 05:16:57.226851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.795 ms 00:19:38.229 [2024-07-26 05:16:57.226861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.229 [2024-07-26 05:16:57.226930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.229 [2024-07-26 05:16:57.226945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:38.229 [2024-07-26 05:16:57.226956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:38.229 [2024-07-26 05:16:57.226965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.229 [2024-07-26 05:16:57.226987] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:38.229 [2024-07-26 05:16:57.227009] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:38.229 [2024-07-26 05:16:57.227041] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:38.229 [2024-07-26 05:16:57.227058] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:38.229 [2024-07-26 05:16:57.227126] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:38.229 [2024-07-26 05:16:57.227155] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:38.229 [2024-07-26 05:16:57.227168] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:38.229 [2024-07-26 05:16:57.227181] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:38.229 [2024-07-26 05:16:57.227193] 
ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:38.229 [2024-07-26 05:16:57.227204] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:38.229 [2024-07-26 05:16:57.227214] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:38.229 [2024-07-26 05:16:57.227224] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:38.229 [2024-07-26 05:16:57.227250] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:38.229 [2024-07-26 05:16:57.227262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.229 [2024-07-26 05:16:57.227275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:38.229 [2024-07-26 05:16:57.227285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:19:38.229 [2024-07-26 05:16:57.227295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.229 [2024-07-26 05:16:57.227356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.229 [2024-07-26 05:16:57.227367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:38.229 [2024-07-26 05:16:57.227377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:38.229 [2024-07-26 05:16:57.227386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.229 [2024-07-26 05:16:57.227460] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:38.229 [2024-07-26 05:16:57.227473] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:38.229 [2024-07-26 05:16:57.227484] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.229 [2024-07-26 05:16:57.227497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.229 [2024-07-26 05:16:57.227507] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:38.229 [2024-07-26 05:16:57.227516] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:38.229 [2024-07-26 05:16:57.227526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:38.229 [2024-07-26 05:16:57.227535] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:38.229 [2024-07-26 05:16:57.227545] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:38.229 [2024-07-26 05:16:57.227554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.229 [2024-07-26 05:16:57.227563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:38.229 [2024-07-26 05:16:57.227574] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:38.229 [2024-07-26 05:16:57.227583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.229 [2024-07-26 05:16:57.227592] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:38.229 [2024-07-26 05:16:57.227601] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:38.229 [2024-07-26 05:16:57.227610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.229 [2024-07-26 05:16:57.227619] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:38.229 [2024-07-26 05:16:57.227629] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:38.229 [2024-07-26 05:16:57.227638] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.230 [2024-07-26 05:16:57.227657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:38.230 [2024-07-26 05:16:57.227667] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:38.230 [2024-07-26 05:16:57.227677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:38.230 [2024-07-26 05:16:57.227686] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:38.230 [2024-07-26 05:16:57.227696] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:38.230 [2024-07-26 05:16:57.227705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:38.230 [2024-07-26 05:16:57.227714] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:38.230 [2024-07-26 05:16:57.227723] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:38.230 [2024-07-26 05:16:57.227733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:38.230 [2024-07-26 05:16:57.227741] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:38.230 [2024-07-26 05:16:57.227750] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:38.230 [2024-07-26 05:16:57.227759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:38.230 [2024-07-26 05:16:57.227768] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:38.230 [2024-07-26 05:16:57.227777] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:38.230 [2024-07-26 05:16:57.227786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:38.230 [2024-07-26 05:16:57.227795] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:38.230 [2024-07-26 05:16:57.227804] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:38.230 [2024-07-26 05:16:57.227812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.230 [2024-07-26 05:16:57.227821] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:38.230 [2024-07-26 05:16:57.227830] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:38.230 [2024-07-26 05:16:57.227839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.230 [2024-07-26 05:16:57.227847] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:38.230 [2024-07-26 05:16:57.227857] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:38.230 [2024-07-26 05:16:57.227867] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.230 [2024-07-26 05:16:57.227878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.230 [2024-07-26 05:16:57.227888] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:38.230 [2024-07-26 05:16:57.227897] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:38.230 [2024-07-26 05:16:57.227906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:38.230 [2024-07-26 05:16:57.227915] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:38.230 [2024-07-26 05:16:57.227924] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:38.230 [2024-07-26 05:16:57.227934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:38.230 
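The layout dump above reports each region's offset and size in MiB; the superblock metadata dump that follows lists the same regions as hex blk_offs/blk_sz values. The two agree if one block is 4 KiB (an assumption here, but every pair in this log checks out: the l2p region's blk_sz:0x5a00 is 23040 blocks x 4 KiB = 90.00 MiB, and data_btm's 0x1900000 is 26214400 blocks x 4 KiB = 102400.00 MiB). A small self-contained C sketch of the conversion, with region values copied from the dumps — illustrative only, not SPDK code:

    /* Convert blk_offs/blk_sz from the superblock layout dump into MiB,
     * assuming the 4 KiB FTL block size implied by the MiB figures above. */
    #include <stdio.h>

    #define BLOCK_SIZE 4096ULL      /* assumed FTL block size, in bytes */
    #define MIB (1024.0 * 1024.0)

    int main(void)
    {
        const struct {
            const char *name;                    /* region name from the dump */
            unsigned long long blk_offs, blk_sz; /* values from the dump */
        } regions[] = {
            { "l2p",      0x20,   0x5a00 },
            { "p2l0",     0x5b20, 0x400 },
            { "data_nvc", 0x6be0, 0x100000 },
            { "data_btm", 0x40,   0x1900000 },   /* on the base device */
        };
        int n = sizeof(regions) / sizeof(regions[0]);

        for (int i = 0; i < n; i++)
            printf("%-8s offset: %.2f MiB blocks: %.2f MiB\n", regions[i].name,
                   regions[i].blk_offs * BLOCK_SIZE / MIB,
                   regions[i].blk_sz * BLOCK_SIZE / MIB);
        return 0;
    }

This yields offsets of 0.12, 91.12, 107.88 and 0.25 MiB and sizes of 90.00, 4.00, 4096.00 and 102400.00 MiB, which lines up with the dump_region values above. The same 4 KiB block size is also consistent with the spdk_dd transfer in this run: 65536 blocks x 4 KiB = 256 MiB, the size the Copying progress below reports.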
[2024-07-26 05:16:57.227944] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:38.230 [2024-07-26 05:16:57.227960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.230 [2024-07-26 05:16:57.227971] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:38.230 [2024-07-26 05:16:57.227982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:38.230 [2024-07-26 05:16:57.227992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:38.230 [2024-07-26 05:16:57.228002] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:38.230 [2024-07-26 05:16:57.228012] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:38.230 [2024-07-26 05:16:57.228022] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:38.230 [2024-07-26 05:16:57.228032] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:38.230 [2024-07-26 05:16:57.228043] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:38.230 [2024-07-26 05:16:57.228053] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:38.230 [2024-07-26 05:16:57.228063] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:38.230 [2024-07-26 05:16:57.228073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:38.230 [2024-07-26 05:16:57.228083] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:38.230 [2024-07-26 05:16:57.228093] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:38.230 [2024-07-26 05:16:57.228103] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:38.230 [2024-07-26 05:16:57.228114] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.230 [2024-07-26 05:16:57.228125] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:38.230 [2024-07-26 05:16:57.228135] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:38.230 [2024-07-26 05:16:57.228145] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:38.230 [2024-07-26 05:16:57.228155] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:38.230 [2024-07-26 05:16:57.228165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.230 [2024-07-26 05:16:57.228180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:38.230 [2024-07-26 05:16:57.228191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:19:38.230 [2024-07-26 05:16:57.228200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.230 [2024-07-26 05:16:57.252805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.230 [2024-07-26 05:16:57.252838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.230 [2024-07-26 05:16:57.252851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.546 ms 00:19:38.230 [2024-07-26 05:16:57.252860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.230 [2024-07-26 05:16:57.252987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.230 [2024-07-26 05:16:57.253000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:38.230 [2024-07-26 05:16:57.253010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:38.230 [2024-07-26 05:16:57.253020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.230 [2024-07-26 05:16:57.316771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.230 [2024-07-26 05:16:57.316803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.230 [2024-07-26 05:16:57.316816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.727 ms 00:19:38.230 [2024-07-26 05:16:57.316826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.230 [2024-07-26 05:16:57.316906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.230 [2024-07-26 05:16:57.316918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.230 [2024-07-26 05:16:57.316929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:38.230 [2024-07-26 05:16:57.316939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.230 [2024-07-26 05:16:57.317405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.230 [2024-07-26 05:16:57.317424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.230 [2024-07-26 05:16:57.317435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:19:38.230 [2024-07-26 05:16:57.317446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.230 [2024-07-26 05:16:57.317558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.230 [2024-07-26 05:16:57.317577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.230 [2024-07-26 05:16:57.317588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:38.230 [2024-07-26 05:16:57.317598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.340664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.340700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.490 [2024-07-26 05:16:57.340713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.044 ms 00:19:38.490 [2024-07-26 05:16:57.340739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.360502] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:38.490 [2024-07-26 05:16:57.360539] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:38.490 [2024-07-26 05:16:57.360552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.360563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:38.490 [2024-07-26 05:16:57.360591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.701 ms 00:19:38.490 [2024-07-26 05:16:57.360600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.390164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.390201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:38.490 [2024-07-26 05:16:57.390221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.487 ms 00:19:38.490 [2024-07-26 05:16:57.390253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.408534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.408566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:38.490 [2024-07-26 05:16:57.408578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.201 ms 00:19:38.490 [2024-07-26 05:16:57.408587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.427020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.427063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:38.490 [2024-07-26 05:16:57.427074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.345 ms 00:19:38.490 [2024-07-26 05:16:57.427083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.427643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.427671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:38.490 [2024-07-26 05:16:57.427682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:19:38.490 [2024-07-26 05:16:57.427692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.517978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.518037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:38.490 [2024-07-26 05:16:57.518053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.258 ms 00:19:38.490 [2024-07-26 05:16:57.518081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.529958] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:38.490 [2024-07-26 05:16:57.546209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.546269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:38.490 [2024-07-26 
05:16:57.546285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.013 ms 00:19:38.490 [2024-07-26 05:16:57.546295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.546408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.546421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:38.490 [2024-07-26 05:16:57.546432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:38.490 [2024-07-26 05:16:57.546443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.546501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.546517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:38.490 [2024-07-26 05:16:57.546527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:38.490 [2024-07-26 05:16:57.546536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.548624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.548654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:38.490 [2024-07-26 05:16:57.548665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.068 ms 00:19:38.490 [2024-07-26 05:16:57.548674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.548705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.548716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:38.490 [2024-07-26 05:16:57.548727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:38.490 [2024-07-26 05:16:57.548739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.548775] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:38.490 [2024-07-26 05:16:57.548786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.548796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:38.490 [2024-07-26 05:16:57.548806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:38.490 [2024-07-26 05:16:57.548815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.585717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.585755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:38.490 [2024-07-26 05:16:57.585774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.877 ms 00:19:38.490 [2024-07-26 05:16:57.585800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.585906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.490 [2024-07-26 05:16:57.585920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:38.490 [2024-07-26 05:16:57.585930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:38.490 [2024-07-26 05:16:57.585940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.490 [2024-07-26 05:16:57.586853] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:38.490 [2024-07-26 05:16:57.591951] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 399.100 ms, result 0 00:19:38.490 [2024-07-26 05:16:57.592857] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:38.749 [2024-07-26 05:16:57.611861] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:46.929  Copying: 34/256 [MB] (34 MBps) Copying: 63/256 [MB] (29 MBps) Copying: 94/256 [MB] (30 MBps) Copying: 125/256 [MB] (30 MBps) Copying: 155/256 [MB] (30 MBps) Copying: 187/256 [MB] (31 MBps) Copying: 217/256 [MB] (30 MBps) Copying: 247/256 [MB] (29 MBps) Copying: 256/256 [MB] (average 30 MBps)[2024-07-26 05:17:05.924027] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:46.930 [2024-07-26 05:17:05.938571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.930 [2024-07-26 05:17:05.938610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:46.930 [2024-07-26 05:17:05.938642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:46.930 [2024-07-26 05:17:05.938658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.930 [2024-07-26 05:17:05.938682] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:46.930 [2024-07-26 05:17:05.942252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.930 [2024-07-26 05:17:05.942281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:46.930 [2024-07-26 05:17:05.942308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.554 ms 00:19:46.930 [2024-07-26 05:17:05.942318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.930 [2024-07-26 05:17:05.942567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.930 [2024-07-26 05:17:05.942580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:46.930 [2024-07-26 05:17:05.942591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:19:46.930 [2024-07-26 05:17:05.942601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.930 [2024-07-26 05:17:05.945548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.930 [2024-07-26 05:17:05.945575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:46.930 [2024-07-26 05:17:05.945587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.931 ms 00:19:46.930 [2024-07-26 05:17:05.945597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.930 [2024-07-26 05:17:05.951389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.930 [2024-07-26 05:17:05.951416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:46.930 [2024-07-26 05:17:05.951428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.774 ms 00:19:46.930 [2024-07-26 05:17:05.951453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.930 [2024-07-26 05:17:05.989049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.930 [2024-07-26 05:17:05.989084] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:46.930 [2024-07-26 05:17:05.989113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.528 ms 00:19:46.930 [2024-07-26 05:17:05.989123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.930 [2024-07-26 05:17:06.010399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.930 [2024-07-26 05:17:06.010436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:46.930 [2024-07-26 05:17:06.010471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.208 ms 00:19:46.930 [2024-07-26 05:17:06.010482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.930 [2024-07-26 05:17:06.010628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.930 [2024-07-26 05:17:06.010641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:46.930 [2024-07-26 05:17:06.010652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:46.930 [2024-07-26 05:17:06.010663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.189 [2024-07-26 05:17:06.048936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.189 [2024-07-26 05:17:06.048972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:47.190 [2024-07-26 05:17:06.048996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.254 ms 00:19:47.190 [2024-07-26 05:17:06.049023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.190 [2024-07-26 05:17:06.087168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.190 [2024-07-26 05:17:06.087230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:47.190 [2024-07-26 05:17:06.087243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.077 ms 00:19:47.190 [2024-07-26 05:17:06.087253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.190 [2024-07-26 05:17:06.123043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.190 [2024-07-26 05:17:06.123078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:47.190 [2024-07-26 05:17:06.123091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.723 ms 00:19:47.190 [2024-07-26 05:17:06.123100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.190 [2024-07-26 05:17:06.158823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.190 [2024-07-26 05:17:06.158857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:47.190 [2024-07-26 05:17:06.158869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.598 ms 00:19:47.190 [2024-07-26 05:17:06.158878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.190 [2024-07-26 05:17:06.158958] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:47.190 [2024-07-26 05:17:06.158976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.158988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.158999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [2024-07-26 05:17:06.159291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free 00:19:47.190 [... ftl_dev_dump_bands: Band 29 through Band 99 elided: 71 identical entries, each 0 / 261120 wr_cnt: 0 state: free ...] 00:19:47.191 [2024-07-26 05:17:06.160044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:47.191 [2024-07-26 05:17:06.160061] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:47.191 [2024-07-26 05:17:06.160082] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bdf932e-f5a4-44d7-9d31-9e46b2e97560 00:19:47.191 [2024-07-26 05:17:06.160093] ftl_debug.c:
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:47.191 [2024-07-26 05:17:06.160102] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:47.191 [2024-07-26 05:17:06.160112] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:47.191 [2024-07-26 05:17:06.160122] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:47.191 [2024-07-26 05:17:06.160132] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:47.191 [2024-07-26 05:17:06.160142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:47.191 [2024-07-26 05:17:06.160152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:47.191 [2024-07-26 05:17:06.160161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:47.191 [2024-07-26 05:17:06.160170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:47.191 [2024-07-26 05:17:06.160180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.191 [2024-07-26 05:17:06.160194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:47.191 [2024-07-26 05:17:06.160213] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.223 ms 00:19:47.191 [2024-07-26 05:17:06.160224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.191 [2024-07-26 05:17:06.179949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.191 [2024-07-26 05:17:06.179981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:47.191 [2024-07-26 05:17:06.179993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.704 ms 00:19:47.191 [2024-07-26 05:17:06.180003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.191 [2024-07-26 05:17:06.180289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.191 [2024-07-26 05:17:06.180310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:47.191 [2024-07-26 05:17:06.180321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:19:47.191 [2024-07-26 05:17:06.180331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.191 [2024-07-26 05:17:06.239044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.191 [2024-07-26 05:17:06.239081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:47.191 [2024-07-26 05:17:06.239096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.191 [2024-07-26 05:17:06.239123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.191 [2024-07-26 05:17:06.239213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.191 [2024-07-26 05:17:06.239237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.191 [2024-07-26 05:17:06.239249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.191 [2024-07-26 05:17:06.239258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.191 [2024-07-26 05:17:06.239305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.191 [2024-07-26 05:17:06.239317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.191 [2024-07-26 05:17:06.239327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
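
The statistics dump above ends with "total writes: 960", "user writes: 0", and "WAF: inf". WAF (write amplification factor) is the ratio of media writes to user-initiated writes, so with a zero denominator it is reported as infinity; all 960 writes at this point are the FTL's own metadata traffic from startup and shutdown. A minimal sketch of that computation, assuming nothing beyond the standard definition (illustrative C, not SPDK's ftl_debug.c):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: WAF = total media writes / user writes.
     * With user_writes == 0 (as in the dump above) the ratio is
     * undefined and is conventionally reported as "inf". */
    static double waf(uint64_t total_writes, uint64_t user_writes)
    {
        return user_writes ? (double)total_writes / (double)user_writes
                           : INFINITY;
    }

    int main(void)
    {
        printf("WAF: %g\n", waf(960, 0));   /* prints: WAF: inf */
        printf("WAF: %g\n", waf(960, 480)); /* prints: WAF: 2   */
        return 0;
    }

Any nonzero user workload immediately brings WAF back to a finite value, which is why the dump only degenerates to "inf" on a freshly trimmed device.
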
0.000 ms 00:19:47.191 [2024-07-26 05:17:06.239337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.191 [2024-07-26 05:17:06.239357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.191 [2024-07-26 05:17:06.239372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.191 [2024-07-26 05:17:06.239382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.191 [2024-07-26 05:17:06.239391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.450 [2024-07-26 05:17:06.356785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.450 [2024-07-26 05:17:06.356841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.450 [2024-07-26 05:17:06.356856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.450 [2024-07-26 05:17:06.356883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.450 [2024-07-26 05:17:06.402445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.450 [2024-07-26 05:17:06.402490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.450 [2024-07-26 05:17:06.402503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.450 [2024-07-26 05:17:06.402528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.450 [2024-07-26 05:17:06.402604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.450 [2024-07-26 05:17:06.402616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.450 [2024-07-26 05:17:06.402627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.450 [2024-07-26 05:17:06.402637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.450 [2024-07-26 05:17:06.402668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.450 [2024-07-26 05:17:06.402679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.450 [2024-07-26 05:17:06.402705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.450 [2024-07-26 05:17:06.402716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.450 [2024-07-26 05:17:06.402835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.450 [2024-07-26 05:17:06.402848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.450 [2024-07-26 05:17:06.402858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.450 [2024-07-26 05:17:06.402867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.450 [2024-07-26 05:17:06.402903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.450 [2024-07-26 05:17:06.402915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:47.450 [2024-07-26 05:17:06.402926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.450 [2024-07-26 05:17:06.402940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.450 [2024-07-26 05:17:06.402978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.450 [2024-07-26 05:17:06.402990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:47.450 [2024-07-26 
05:17:06.402999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.450 [2024-07-26 05:17:06.403010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.450 [2024-07-26 05:17:06.403054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.450 [2024-07-26 05:17:06.403065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:47.450 [2024-07-26 05:17:06.403079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.450 [2024-07-26 05:17:06.403093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.450 [2024-07-26 05:17:06.403230] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 464.660 ms, result 0 00:19:48.825 00:19:48.825 00:19:48.825 05:17:07 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:48.825 05:17:07 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:49.084 05:17:08 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:49.341 [2024-07-26 05:17:08.263020] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:49.341 [2024-07-26 05:17:08.263127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73809 ] 00:19:49.341 [2024-07-26 05:17:08.424725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.599 [2024-07-26 05:17:08.650794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.167 [2024-07-26 05:17:09.053669] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:50.167 [2024-07-26 05:17:09.053730] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:50.167 [2024-07-26 05:17:09.210140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-26 05:17:09.210185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:50.167 [2024-07-26 05:17:09.210201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:50.167 [2024-07-26 05:17:09.210223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-26 05:17:09.213548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-26 05:17:09.213582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.167 [2024-07-26 05:17:09.213594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.304 ms 00:19:50.167 [2024-07-26 05:17:09.213608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-26 05:17:09.213708] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:50.167 [2024-07-26 05:17:09.214866] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:50.167 [2024-07-26 05:17:09.214896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-26 05:17:09.214910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
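
From here the test moves on: trim.sh compares the read-back data against /dev/zero, checksums it, and then uses spdk_dd to write 1024 blocks of random_pattern into the ftl0 bdev, which triggers the second FTL startup traced below. Throughout the log, every management step is reported as the same quadruple of NOTICE records: a step type (Action on the startup/shutdown path, Rollback for the cleanup counterparts, all 0.000 ms here), a name, a duration, and a status (0 on success). A rough sketch of that record shape, with field names that are assumptions rather than SPDK's actual mngt/ftl_mngt.c internals:

    #include <stdio.h>

    /* Illustrative only: models the Action/name/duration/status
     * quadruples emitted for each FTL management step in this log. */
    struct step_trace {
        const char *type;   /* "Action" or "Rollback" */
        const char *name;   /* e.g. "Open base bdev" */
        double duration_ms; /* 0.000 for rollback no-ops */
        int status;         /* 0 == success */
    };

    static void dump_step(const struct step_trace *t)
    {
        printf("[FTL][ftl0] %s\n", t->type);
        printf("[FTL][ftl0]     name:     %s\n", t->name);
        printf("[FTL][ftl0]     duration: %.3f ms\n", t->duration_ms);
        printf("[FTL][ftl0]     status:   %d\n", t->status);
    }

    int main(void)
    {
        /* values taken from the "Open base bdev" step just above */
        struct step_trace t = { "Action", "Open base bdev", 3.304, 0 };
        dump_step(&t);
        return 0;
    }
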
[FTL][ftl0] name: Open cache bdev 00:19:50.167 [2024-07-26 05:17:09.214921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:19:50.167 [2024-07-26 05:17:09.214931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-26 05:17:09.216364] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:50.167 [2024-07-26 05:17:09.236134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-26 05:17:09.236168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:50.167 [2024-07-26 05:17:09.236182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.771 ms 00:19:50.167 [2024-07-26 05:17:09.236193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-26 05:17:09.236313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-26 05:17:09.236327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:50.167 [2024-07-26 05:17:09.236342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:50.167 [2024-07-26 05:17:09.236352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-26 05:17:09.242991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-26 05:17:09.243016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.167 [2024-07-26 05:17:09.243028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.595 ms 00:19:50.167 [2024-07-26 05:17:09.243037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-26 05:17:09.243144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-26 05:17:09.243160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.167 [2024-07-26 05:17:09.243170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:50.167 [2024-07-26 05:17:09.243179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-26 05:17:09.243220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-26 05:17:09.243231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:50.167 [2024-07-26 05:17:09.243241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:50.167 [2024-07-26 05:17:09.243250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-26 05:17:09.243293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:50.167 [2024-07-26 05:17:09.249159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-26 05:17:09.249189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.167 [2024-07-26 05:17:09.249200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.875 ms 00:19:50.167 [2024-07-26 05:17:09.249225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-26 05:17:09.249318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-26 05:17:09.249336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:50.167 [2024-07-26 05:17:09.249347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.010 ms 00:19:50.167 [2024-07-26 05:17:09.249357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-26 05:17:09.249381] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:50.167 [2024-07-26 05:17:09.249404] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:50.167 [2024-07-26 05:17:09.249437] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:50.167 [2024-07-26 05:17:09.249455] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:50.167 [2024-07-26 05:17:09.249531] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:50.167 [2024-07-26 05:17:09.249544] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:50.167 [2024-07-26 05:17:09.249557] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:50.167 [2024-07-26 05:17:09.249570] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:50.167 [2024-07-26 05:17:09.249581] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:50.167 [2024-07-26 05:17:09.249593] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:50.167 [2024-07-26 05:17:09.249602] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:50.167 [2024-07-26 05:17:09.249612] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:50.167 [2024-07-26 05:17:09.249622] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:50.167 [2024-07-26 05:17:09.249633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-26 05:17:09.249646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:50.167 [2024-07-26 05:17:09.249656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:19:50.167 [2024-07-26 05:17:09.249666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-26 05:17:09.249727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-26 05:17:09.249738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:50.167 [2024-07-26 05:17:09.249748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:50.167 [2024-07-26 05:17:09.249758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-26 05:17:09.249828] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:50.167 [2024-07-26 05:17:09.249840] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:50.167 [2024-07-26 05:17:09.249850] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.167 [2024-07-26 05:17:09.249864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.167 [2024-07-26 05:17:09.249875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:50.167 [2024-07-26 05:17:09.249885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:50.167 [2024-07-26 
05:17:09.249894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:50.167 [2024-07-26 05:17:09.249904] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:50.167 [2024-07-26 05:17:09.249913] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:50.167 [2024-07-26 05:17:09.249922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.167 [2024-07-26 05:17:09.249931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:50.167 [2024-07-26 05:17:09.249941] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:50.167 [2024-07-26 05:17:09.249949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.167 [2024-07-26 05:17:09.249959] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:50.167 [2024-07-26 05:17:09.249968] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:50.167 [2024-07-26 05:17:09.249977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.167 [2024-07-26 05:17:09.249986] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:50.167 [2024-07-26 05:17:09.249995] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:50.167 [2024-07-26 05:17:09.250004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.167 [2024-07-26 05:17:09.250022] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:50.167 [2024-07-26 05:17:09.250031] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:50.167 [2024-07-26 05:17:09.250040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:50.167 [2024-07-26 05:17:09.250049] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:50.167 [2024-07-26 05:17:09.250059] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:50.167 [2024-07-26 05:17:09.250068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:50.167 [2024-07-26 05:17:09.250077] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:50.167 [2024-07-26 05:17:09.250087] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:50.167 [2024-07-26 05:17:09.250096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:50.167 [2024-07-26 05:17:09.250105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:50.167 [2024-07-26 05:17:09.250114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:50.167 [2024-07-26 05:17:09.250123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:50.167 [2024-07-26 05:17:09.250132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:50.167 [2024-07-26 05:17:09.250141] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:50.167 [2024-07-26 05:17:09.250150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:50.168 [2024-07-26 05:17:09.250159] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:50.168 [2024-07-26 05:17:09.250168] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:50.168 [2024-07-26 05:17:09.250178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.168 [2024-07-26 05:17:09.250187] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:19:50.168 [2024-07-26 05:17:09.250196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:50.168 [2024-07-26 05:17:09.250217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.168 [2024-07-26 05:17:09.250226] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:50.168 [2024-07-26 05:17:09.250246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:50.168 [2024-07-26 05:17:09.250256] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.168 [2024-07-26 05:17:09.250266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.168 [2024-07-26 05:17:09.250276] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:50.168 [2024-07-26 05:17:09.250286] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:50.168 [2024-07-26 05:17:09.250295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:50.168 [2024-07-26 05:17:09.250304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:50.168 [2024-07-26 05:17:09.250313] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:50.168 [2024-07-26 05:17:09.250322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:50.168 [2024-07-26 05:17:09.250332] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:50.168 [2024-07-26 05:17:09.250348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.168 [2024-07-26 05:17:09.250359] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:50.168 [2024-07-26 05:17:09.250370] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:50.168 [2024-07-26 05:17:09.250380] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:50.168 [2024-07-26 05:17:09.250391] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:50.168 [2024-07-26 05:17:09.250401] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:50.168 [2024-07-26 05:17:09.250411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:50.168 [2024-07-26 05:17:09.250421] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:50.168 [2024-07-26 05:17:09.250432] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:50.168 [2024-07-26 05:17:09.250442] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:50.168 [2024-07-26 05:17:09.250452] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:50.168 [2024-07-26 05:17:09.250462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:50.168 [2024-07-26 05:17:09.250472] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:50.168 [2024-07-26 05:17:09.250483] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:50.168 [2024-07-26 05:17:09.250493] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:50.168 [2024-07-26 05:17:09.250504] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.168 [2024-07-26 05:17:09.250515] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:50.168 [2024-07-26 05:17:09.250526] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:50.168 [2024-07-26 05:17:09.250536] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:50.168 [2024-07-26 05:17:09.250547] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:50.168 [2024-07-26 05:17:09.250557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.168 [2024-07-26 05:17:09.250570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:50.168 [2024-07-26 05:17:09.250580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:19:50.168 [2024-07-26 05:17:09.250590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.427 [2024-07-26 05:17:09.276866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.427 [2024-07-26 05:17:09.276899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.427 [2024-07-26 05:17:09.276912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.229 ms 00:19:50.427 [2024-07-26 05:17:09.276923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.427 [2024-07-26 05:17:09.277038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.427 [2024-07-26 05:17:09.277051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:50.427 [2024-07-26 05:17:09.277062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:50.427 [2024-07-26 05:17:09.277072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.427 [2024-07-26 05:17:09.342288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.427 [2024-07-26 05:17:09.342322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.427 [2024-07-26 05:17:09.342336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.192 ms 00:19:50.427 [2024-07-26 05:17:09.342346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.427 [2024-07-26 05:17:09.342424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.427 [2024-07-26 05:17:09.342436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.427 [2024-07-26 05:17:09.342447] mngt/ftl_mngt.c: 409:trace_step: 
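
The two layout dumps above are mutually consistent once the FTL block size is factored in, and the dump pins that size down itself: region type 0x9 (the base-device data region, data_btm) spans 0x1900000 = 26214400 blocks and is listed as 102400.00 MiB, i.e. 4 KiB per block. The same factor ties the superblock entry for l2p (blk_sz:0x5a00) to its 90.00 MiB listing, which also equals the 23592960 L2P entries at the 4-byte address size reported earlier. A quick cross-check of that arithmetic (illustrative only):

    #include <stdio.h>

    int main(void)
    {
        const unsigned long long MiB = 1024ULL * 1024ULL;

        /* data_btm: 102400.00 MiB over 0x1900000 blocks => 4096 B/block */
        unsigned long long blk = (102400ULL * MiB) / 0x1900000ULL;
        printf("block size: %llu B\n", blk);                        /* 4096 */

        /* l2p region: 0x5a00 blocks * 4 KiB = 90 MiB */
        printf("l2p region: %llu MiB\n", 0x5a00ULL * blk / MiB);    /* 90 */

        /* equivalently: 23592960 L2P entries * 4-byte addresses */
        printf("l2p table:  %llu MiB\n", 23592960ULL * 4ULL / MiB); /* 90 */
        return 0;
    }
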
*NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:50.427 [2024-07-26 05:17:09.342457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.427 [2024-07-26 05:17:09.342899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.427 [2024-07-26 05:17:09.342917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.427 [2024-07-26 05:17:09.342928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:19:50.427 [2024-07-26 05:17:09.342938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.427 [2024-07-26 05:17:09.343048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.427 [2024-07-26 05:17:09.343061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.427 [2024-07-26 05:17:09.343071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:50.427 [2024-07-26 05:17:09.343082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.427 [2024-07-26 05:17:09.365833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.427 [2024-07-26 05:17:09.365863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.427 [2024-07-26 05:17:09.365876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.727 ms 00:19:50.427 [2024-07-26 05:17:09.365887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.427 [2024-07-26 05:17:09.385102] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:50.427 [2024-07-26 05:17:09.385135] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:50.428 [2024-07-26 05:17:09.385149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.428 [2024-07-26 05:17:09.385159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:50.428 [2024-07-26 05:17:09.385169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.150 ms 00:19:50.428 [2024-07-26 05:17:09.385179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.428 [2024-07-26 05:17:09.414963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.428 [2024-07-26 05:17:09.414995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:50.428 [2024-07-26 05:17:09.415008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.678 ms 00:19:50.428 [2024-07-26 05:17:09.415023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.428 [2024-07-26 05:17:09.434295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.428 [2024-07-26 05:17:09.434336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:50.428 [2024-07-26 05:17:09.434349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.198 ms 00:19:50.428 [2024-07-26 05:17:09.434359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.428 [2024-07-26 05:17:09.452747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.428 [2024-07-26 05:17:09.452785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:50.428 [2024-07-26 05:17:09.452813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.317 ms 00:19:50.428 [2024-07-26 
05:17:09.452823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.428 [2024-07-26 05:17:09.453342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.428 [2024-07-26 05:17:09.453358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:50.428 [2024-07-26 05:17:09.453369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:19:50.428 [2024-07-26 05:17:09.453379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.686 [2024-07-26 05:17:09.548573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.686 [2024-07-26 05:17:09.548621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:50.686 [2024-07-26 05:17:09.548638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.166 ms 00:19:50.686 [2024-07-26 05:17:09.548649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.686 [2024-07-26 05:17:09.561186] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:50.686 [2024-07-26 05:17:09.577785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.686 [2024-07-26 05:17:09.577826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:50.686 [2024-07-26 05:17:09.577842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.017 ms 00:19:50.686 [2024-07-26 05:17:09.577855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.686 [2024-07-26 05:17:09.577969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.686 [2024-07-26 05:17:09.577982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:50.686 [2024-07-26 05:17:09.577994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:50.686 [2024-07-26 05:17:09.578004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.686 [2024-07-26 05:17:09.578062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.686 [2024-07-26 05:17:09.578078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:50.686 [2024-07-26 05:17:09.578089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:50.686 [2024-07-26 05:17:09.578099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.686 [2024-07-26 05:17:09.580127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.686 [2024-07-26 05:17:09.580156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:50.686 [2024-07-26 05:17:09.580167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.007 ms 00:19:50.686 [2024-07-26 05:17:09.580177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.686 [2024-07-26 05:17:09.580222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.686 [2024-07-26 05:17:09.580233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:50.686 [2024-07-26 05:17:09.580244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:50.686 [2024-07-26 05:17:09.580257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.686 [2024-07-26 05:17:09.580295] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:50.686 [2024-07-26 05:17:09.580307] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.686 [2024-07-26 05:17:09.580317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:50.686 [2024-07-26 05:17:09.580328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:50.686 [2024-07-26 05:17:09.580338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.686 [2024-07-26 05:17:09.619316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.686 [2024-07-26 05:17:09.619350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:50.687 [2024-07-26 05:17:09.619370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.952 ms 00:19:50.687 [2024-07-26 05:17:09.619380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.687 [2024-07-26 05:17:09.619486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.687 [2024-07-26 05:17:09.619500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:50.687 [2024-07-26 05:17:09.619511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:50.687 [2024-07-26 05:17:09.619520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.687 [2024-07-26 05:17:09.620488] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:50.687 [2024-07-26 05:17:09.625887] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 410.000 ms, result 0 00:19:50.687 [2024-07-26 05:17:09.626665] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.687 [2024-07-26 05:17:09.646221] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:50.687  Copying: 4096/4096 [kB] (average 28 MBps)[2024-07-26 05:17:09.791468] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.946 [2024-07-26 05:17:09.805702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.946 [2024-07-26 05:17:09.805734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:50.946 [2024-07-26 05:17:09.805748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:50.946 [2024-07-26 05:17:09.805765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.946 [2024-07-26 05:17:09.805789] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:50.946 [2024-07-26 05:17:09.809094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.946 [2024-07-26 05:17:09.809118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:50.946 [2024-07-26 05:17:09.809130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.290 ms 00:19:50.946 [2024-07-26 05:17:09.809140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.946 [2024-07-26 05:17:09.810964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.946 [2024-07-26 05:17:09.810995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:50.946 [2024-07-26 05:17:09.811008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:19:50.946 [2024-07-26 
05:17:09.811019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.946 [2024-07-26 05:17:09.814121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.946 [2024-07-26 05:17:09.814154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:50.946 [2024-07-26 05:17:09.814165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.084 ms 00:19:50.946 [2024-07-26 05:17:09.814176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.946 [2024-07-26 05:17:09.819990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.946 [2024-07-26 05:17:09.820018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:50.946 [2024-07-26 05:17:09.820030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.783 ms 00:19:50.946 [2024-07-26 05:17:09.820056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.946 [2024-07-26 05:17:09.858929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.946 [2024-07-26 05:17:09.858959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:50.946 [2024-07-26 05:17:09.858989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.803 ms 00:19:50.946 [2024-07-26 05:17:09.858999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.946 [2024-07-26 05:17:09.880721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.946 [2024-07-26 05:17:09.880752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:50.946 [2024-07-26 05:17:09.880786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.656 ms 00:19:50.946 [2024-07-26 05:17:09.880796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.946 [2024-07-26 05:17:09.880939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.946 [2024-07-26 05:17:09.880953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:50.946 [2024-07-26 05:17:09.880964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:50.946 [2024-07-26 05:17:09.880974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.946 [2024-07-26 05:17:09.920335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.946 [2024-07-26 05:17:09.920366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:50.946 [2024-07-26 05:17:09.920390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.343 ms 00:19:50.946 [2024-07-26 05:17:09.920400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.946 [2024-07-26 05:17:09.957893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.946 [2024-07-26 05:17:09.957923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:50.946 [2024-07-26 05:17:09.957952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.425 ms 00:19:50.946 [2024-07-26 05:17:09.957961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.946 [2024-07-26 05:17:09.995466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.946 [2024-07-26 05:17:09.995495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:50.946 [2024-07-26 05:17:09.995518] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 37.439 ms 00:19:50.946 [2024-07-26 05:17:09.995544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.946 [2024-07-26 05:17:10.034888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.946 [2024-07-26 05:17:10.034920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:50.946 [2024-07-26 05:17:10.034934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.252 ms 00:19:50.946 [2024-07-26 05:17:10.034944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.946 [2024-07-26 05:17:10.035012] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:50.946 [2024-07-26 05:17:10.035029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:50.946 [2024-07-26 05:17:10.035042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:50.946 [2024-07-26 05:17:10.035054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:50.946 [2024-07-26 05:17:10.035065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:50.946 [2024-07-26 05:17:10.035076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:50.947 [2024-07-26 05:17:10.035243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 
0 state: free 00:19:50.947 [... ftl_dev_dump_bands: Band 21 through Band 94 elided: 74 identical entries, each 0 / 261120 wr_cnt: 0 state: free ...] 00:19:50.948 [2024-07-26 05:17:10.036034] ftl_debug.c:
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:50.948 [2024-07-26 05:17:10.036045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:50.948 [2024-07-26 05:17:10.036055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:50.948 [2024-07-26 05:17:10.036066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:50.948 [2024-07-26 05:17:10.036076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:50.948 [2024-07-26 05:17:10.036087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:50.948 [2024-07-26 05:17:10.036105] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:50.948 [2024-07-26 05:17:10.036127] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bdf932e-f5a4-44d7-9d31-9e46b2e97560 00:19:50.948 [2024-07-26 05:17:10.036139] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:50.948 [2024-07-26 05:17:10.036148] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:50.948 [2024-07-26 05:17:10.036158] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:50.948 [2024-07-26 05:17:10.036168] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:50.948 [2024-07-26 05:17:10.036178] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:50.948 [2024-07-26 05:17:10.036188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:50.948 [2024-07-26 05:17:10.036199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:50.948 [2024-07-26 05:17:10.036216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:50.948 [2024-07-26 05:17:10.036225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:50.948 [2024-07-26 05:17:10.036235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.948 [2024-07-26 05:17:10.036249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:50.948 [2024-07-26 05:17:10.036261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.225 ms 00:19:50.948 [2024-07-26 05:17:10.036270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.054904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.206 [2024-07-26 05:17:10.054933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:51.206 [2024-07-26 05:17:10.054947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.612 ms 00:19:51.206 [2024-07-26 05:17:10.054957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.055204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.206 [2024-07-26 05:17:10.055233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:51.206 [2024-07-26 05:17:10.055244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:19:51.206 [2024-07-26 05:17:10.055254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.114266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.206 [2024-07-26 
05:17:10.114301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.206 [2024-07-26 05:17:10.114314] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.206 [2024-07-26 05:17:10.114325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.114436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.206 [2024-07-26 05:17:10.114450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.206 [2024-07-26 05:17:10.114461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.206 [2024-07-26 05:17:10.114471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.114520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.206 [2024-07-26 05:17:10.114532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.206 [2024-07-26 05:17:10.114543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.206 [2024-07-26 05:17:10.114553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.114578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.206 [2024-07-26 05:17:10.114589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.206 [2024-07-26 05:17:10.114599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.206 [2024-07-26 05:17:10.114609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.233456] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.206 [2024-07-26 05:17:10.233510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.206 [2024-07-26 05:17:10.233526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.206 [2024-07-26 05:17:10.233537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.280044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.206 [2024-07-26 05:17:10.280084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.206 [2024-07-26 05:17:10.280097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.206 [2024-07-26 05:17:10.280108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.280187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.206 [2024-07-26 05:17:10.280199] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.206 [2024-07-26 05:17:10.280224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.206 [2024-07-26 05:17:10.280235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.280266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.206 [2024-07-26 05:17:10.280282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.206 [2024-07-26 05:17:10.280292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.206 [2024-07-26 05:17:10.280302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.280414] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.206 [2024-07-26 05:17:10.280427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.206 [2024-07-26 05:17:10.280438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.206 [2024-07-26 05:17:10.280448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.280489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.206 [2024-07-26 05:17:10.280501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:51.206 [2024-07-26 05:17:10.280516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.206 [2024-07-26 05:17:10.280526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.280563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.206 [2024-07-26 05:17:10.280574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.206 [2024-07-26 05:17:10.280584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.206 [2024-07-26 05:17:10.280594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.280641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.206 [2024-07-26 05:17:10.280653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.206 [2024-07-26 05:17:10.280667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.206 [2024-07-26 05:17:10.280680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.206 [2024-07-26 05:17:10.280816] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 475.115 ms, result 0 00:19:52.629 00:19:52.629 00:19:52.629 05:17:11 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:52.629 05:17:11 -- ftl/trim.sh@93 -- # svcpid=73851 00:19:52.629 05:17:11 -- ftl/trim.sh@94 -- # waitforlisten 73851 00:19:52.629 05:17:11 -- common/autotest_common.sh@819 -- # '[' -z 73851 ']' 00:19:52.629 05:17:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.629 05:17:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:52.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.629 05:17:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.629 05:17:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:52.629 05:17:11 -- common/autotest_common.sh@10 -- # set +x 00:19:52.629 [2024-07-26 05:17:11.600484] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:52.629 [2024-07-26 05:17:11.600646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73851 ] 00:19:52.887 [2024-07-26 05:17:11.780268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.187 [2024-07-26 05:17:12.013689] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:53.187 [2024-07-26 05:17:12.013874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.123 05:17:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:54.123 05:17:13 -- common/autotest_common.sh@852 -- # return 0 00:19:54.123 05:17:13 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:54.382 [2024-07-26 05:17:13.378592] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:54.382 [2024-07-26 05:17:13.378651] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:54.642 [2024-07-26 05:17:13.550821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.642 [2024-07-26 05:17:13.550888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:54.642 [2024-07-26 05:17:13.550906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:54.642 [2024-07-26 05:17:13.550916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.642 [2024-07-26 05:17:13.554071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.642 [2024-07-26 05:17:13.554109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.642 [2024-07-26 05:17:13.554124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.134 ms 00:19:54.642 [2024-07-26 05:17:13.554134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.642 [2024-07-26 05:17:13.554241] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:54.642 [2024-07-26 05:17:13.555509] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:54.642 [2024-07-26 05:17:13.555543] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.642 [2024-07-26 05:17:13.555554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.642 [2024-07-26 05:17:13.555567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.314 ms 00:19:54.642 [2024-07-26 05:17:13.555578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.642 [2024-07-26 05:17:13.557004] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:54.642 [2024-07-26 05:17:13.577612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.643 [2024-07-26 05:17:13.577657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:54.643 [2024-07-26 05:17:13.577671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.612 ms 00:19:54.643 [2024-07-26 05:17:13.577684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.643 [2024-07-26 05:17:13.577779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.643 [2024-07-26 05:17:13.577795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:19:54.643 [2024-07-26 05:17:13.577807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:54.643 [2024-07-26 05:17:13.577819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.643 [2024-07-26 05:17:13.584468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.643 [2024-07-26 05:17:13.584499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.643 [2024-07-26 05:17:13.584528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.600 ms 00:19:54.643 [2024-07-26 05:17:13.584542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.643 [2024-07-26 05:17:13.584631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.643 [2024-07-26 05:17:13.584648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.643 [2024-07-26 05:17:13.584659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:54.643 [2024-07-26 05:17:13.584671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.643 [2024-07-26 05:17:13.584699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.643 [2024-07-26 05:17:13.584717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:54.643 [2024-07-26 05:17:13.584727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:54.643 [2024-07-26 05:17:13.584739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.643 [2024-07-26 05:17:13.584768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:54.643 [2024-07-26 05:17:13.590509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.643 [2024-07-26 05:17:13.590557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.643 [2024-07-26 05:17:13.590573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.746 ms 00:19:54.643 [2024-07-26 05:17:13.590583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.643 [2024-07-26 05:17:13.590653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.643 [2024-07-26 05:17:13.590665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:54.643 [2024-07-26 05:17:13.590678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:54.643 [2024-07-26 05:17:13.590688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.643 [2024-07-26 05:17:13.590714] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:54.643 [2024-07-26 05:17:13.590739] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:54.643 [2024-07-26 05:17:13.590775] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:54.643 [2024-07-26 05:17:13.590792] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:54.643 [2024-07-26 05:17:13.590862] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:54.643 [2024-07-26 05:17:13.590875] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:19:54.643 [2024-07-26 05:17:13.590890] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:54.643 [2024-07-26 05:17:13.590903] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:54.643 [2024-07-26 05:17:13.590920] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:54.643 [2024-07-26 05:17:13.590930] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:54.643 [2024-07-26 05:17:13.590943] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:54.643 [2024-07-26 05:17:13.590953] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:54.643 [2024-07-26 05:17:13.590967] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:54.643 [2024-07-26 05:17:13.590977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.643 [2024-07-26 05:17:13.590990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:54.643 [2024-07-26 05:17:13.591000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:19:54.643 [2024-07-26 05:17:13.591012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.643 [2024-07-26 05:17:13.591072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.643 [2024-07-26 05:17:13.591085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:54.643 [2024-07-26 05:17:13.591097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:54.643 [2024-07-26 05:17:13.591109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.643 [2024-07-26 05:17:13.591180] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:54.643 [2024-07-26 05:17:13.591216] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:54.643 [2024-07-26 05:17:13.591227] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.643 [2024-07-26 05:17:13.591240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.643 [2024-07-26 05:17:13.591251] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:54.643 [2024-07-26 05:17:13.591264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:54.643 [2024-07-26 05:17:13.591274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:54.643 [2024-07-26 05:17:13.591288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:54.643 [2024-07-26 05:17:13.591297] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:54.643 [2024-07-26 05:17:13.591309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.643 [2024-07-26 05:17:13.591318] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:54.643 [2024-07-26 05:17:13.591331] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:54.643 [2024-07-26 05:17:13.591341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.643 [2024-07-26 05:17:13.591352] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:54.643 [2024-07-26 05:17:13.591361] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:54.643 [2024-07-26 05:17:13.591373] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.643 [2024-07-26 05:17:13.591382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:54.643 [2024-07-26 05:17:13.591394] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:54.643 [2024-07-26 05:17:13.591402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.643 [2024-07-26 05:17:13.591414] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:54.643 [2024-07-26 05:17:13.591423] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:54.643 [2024-07-26 05:17:13.591435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:54.643 [2024-07-26 05:17:13.591444] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:54.643 [2024-07-26 05:17:13.591458] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:54.643 [2024-07-26 05:17:13.591467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:54.643 [2024-07-26 05:17:13.591478] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:54.643 [2024-07-26 05:17:13.591488] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:54.643 [2024-07-26 05:17:13.591499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:54.643 [2024-07-26 05:17:13.591508] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:54.643 [2024-07-26 05:17:13.591520] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:54.643 [2024-07-26 05:17:13.591539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:54.643 [2024-07-26 05:17:13.591551] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:54.643 [2024-07-26 05:17:13.591560] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:54.643 [2024-07-26 05:17:13.591571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:54.643 [2024-07-26 05:17:13.591580] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:54.643 [2024-07-26 05:17:13.591592] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:54.643 [2024-07-26 05:17:13.591600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.643 [2024-07-26 05:17:13.591611] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:54.643 [2024-07-26 05:17:13.591620] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:54.643 [2024-07-26 05:17:13.591634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.643 [2024-07-26 05:17:13.591643] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:54.643 [2024-07-26 05:17:13.591655] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:54.643 [2024-07-26 05:17:13.591664] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.643 [2024-07-26 05:17:13.591679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.643 [2024-07-26 05:17:13.591689] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:54.643 [2024-07-26 05:17:13.591701] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:54.643 [2024-07-26 05:17:13.591710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:19:54.643 [2024-07-26 05:17:13.591722] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:54.643 [2024-07-26 05:17:13.591731] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:54.643 [2024-07-26 05:17:13.591742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:54.643 [2024-07-26 05:17:13.591753] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:54.643 [2024-07-26 05:17:13.591767] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.644 [2024-07-26 05:17:13.591778] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:54.644 [2024-07-26 05:17:13.591791] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:54.644 [2024-07-26 05:17:13.591801] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:54.644 [2024-07-26 05:17:13.591817] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:54.644 [2024-07-26 05:17:13.591828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:54.644 [2024-07-26 05:17:13.591840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:54.644 [2024-07-26 05:17:13.591850] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:54.644 [2024-07-26 05:17:13.591862] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:54.644 [2024-07-26 05:17:13.591874] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:54.644 [2024-07-26 05:17:13.591886] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:54.644 [2024-07-26 05:17:13.591896] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:54.644 [2024-07-26 05:17:13.591909] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:54.644 [2024-07-26 05:17:13.591920] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:54.644 [2024-07-26 05:17:13.591932] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:54.644 [2024-07-26 05:17:13.591942] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.644 [2024-07-26 05:17:13.591956] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:54.644 [2024-07-26 05:17:13.591966] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:54.644 [2024-07-26 05:17:13.591979] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:54.644 [2024-07-26 05:17:13.591989] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:54.644 [2024-07-26 05:17:13.592005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.644 [2024-07-26 05:17:13.592014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:54.644 [2024-07-26 05:17:13.592027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:19:54.644 [2024-07-26 05:17:13.592036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.644 [2024-07-26 05:17:13.617087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.644 [2024-07-26 05:17:13.617119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.644 [2024-07-26 05:17:13.617135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.000 ms 00:19:54.644 [2024-07-26 05:17:13.617146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.644 [2024-07-26 05:17:13.617281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.644 [2024-07-26 05:17:13.617297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:54.644 [2024-07-26 05:17:13.617310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:54.644 [2024-07-26 05:17:13.617320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.644 [2024-07-26 05:17:13.670491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.644 [2024-07-26 05:17:13.670527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.644 [2024-07-26 05:17:13.670543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.146 ms 00:19:54.644 [2024-07-26 05:17:13.670554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.644 [2024-07-26 05:17:13.670623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.644 [2024-07-26 05:17:13.670635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.644 [2024-07-26 05:17:13.670648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:54.644 [2024-07-26 05:17:13.670660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.644 [2024-07-26 05:17:13.671093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.644 [2024-07-26 05:17:13.671113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.644 [2024-07-26 05:17:13.671128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:19:54.644 [2024-07-26 05:17:13.671138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.644 [2024-07-26 05:17:13.671264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.644 [2024-07-26 05:17:13.671283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.644 [2024-07-26 05:17:13.671296] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:54.644 [2024-07-26 05:17:13.671306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:54.644 [2024-07-26 05:17:13.694481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.644 [2024-07-26 05:17:13.694529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.644 [2024-07-26 05:17:13.694547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.147 ms 00:19:54.644 [2024-07-26 05:17:13.694557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.644 [2024-07-26 05:17:13.714512] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:54.644 [2024-07-26 05:17:13.714547] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:54.644 [2024-07-26 05:17:13.714563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.644 [2024-07-26 05:17:13.714590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:54.644 [2024-07-26 05:17:13.714605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.885 ms 00:19:54.644 [2024-07-26 05:17:13.714615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.644 [2024-07-26 05:17:13.746301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.644 [2024-07-26 05:17:13.746339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:54.644 [2024-07-26 05:17:13.746358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.606 ms 00:19:54.644 [2024-07-26 05:17:13.746369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.904 [2024-07-26 05:17:13.765873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.904 [2024-07-26 05:17:13.765909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:54.904 [2024-07-26 05:17:13.765925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.425 ms 00:19:54.904 [2024-07-26 05:17:13.765935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.904 [2024-07-26 05:17:13.785438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.904 [2024-07-26 05:17:13.785472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:54.904 [2024-07-26 05:17:13.785489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.425 ms 00:19:54.904 [2024-07-26 05:17:13.785515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.904 [2024-07-26 05:17:13.786021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.904 [2024-07-26 05:17:13.786049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:54.904 [2024-07-26 05:17:13.786063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:19:54.904 [2024-07-26 05:17:13.786073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.904 [2024-07-26 05:17:13.879194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.904 [2024-07-26 05:17:13.879259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:54.904 [2024-07-26 05:17:13.879278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.085 ms 00:19:54.904 [2024-07-26 05:17:13.879308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.904 [2024-07-26 
05:17:13.891734] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:54.904 [2024-07-26 05:17:13.907950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.904 [2024-07-26 05:17:13.908006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:54.904 [2024-07-26 05:17:13.908037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.538 ms 00:19:54.904 [2024-07-26 05:17:13.908050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.904 [2024-07-26 05:17:13.908152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.904 [2024-07-26 05:17:13.908170] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:54.904 [2024-07-26 05:17:13.908182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:54.904 [2024-07-26 05:17:13.908194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.904 [2024-07-26 05:17:13.908261] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.904 [2024-07-26 05:17:13.908276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:54.904 [2024-07-26 05:17:13.908287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:54.904 [2024-07-26 05:17:13.908299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.904 [2024-07-26 05:17:13.910325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.904 [2024-07-26 05:17:13.910355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:54.904 [2024-07-26 05:17:13.910366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.003 ms 00:19:54.904 [2024-07-26 05:17:13.910379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.904 [2024-07-26 05:17:13.910408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.904 [2024-07-26 05:17:13.910424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:54.904 [2024-07-26 05:17:13.910434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:54.904 [2024-07-26 05:17:13.910450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.904 [2024-07-26 05:17:13.910489] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:54.904 [2024-07-26 05:17:13.910505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.904 [2024-07-26 05:17:13.910516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:54.904 [2024-07-26 05:17:13.910528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:54.904 [2024-07-26 05:17:13.910538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.904 [2024-07-26 05:17:13.948982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.904 [2024-07-26 05:17:13.949023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:54.904 [2024-07-26 05:17:13.949039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.414 ms 00:19:54.904 [2024-07-26 05:17:13.949065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.904 [2024-07-26 05:17:13.949174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.904 [2024-07-26 05:17:13.949187] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:54.904 [2024-07-26 05:17:13.949200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:54.905 [2024-07-26 05:17:13.949224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.905 [2024-07-26 05:17:13.950163] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:54.905 [2024-07-26 05:17:13.955566] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 399.008 ms, result 0 00:19:54.905 [2024-07-26 05:17:13.956772] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:54.905 Some configs were skipped because the RPC state that can call them passed over. 00:19:54.905 05:17:14 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:55.163 [2024-07-26 05:17:14.198931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.163 [2024-07-26 05:17:14.198990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:55.163 [2024-07-26 05:17:14.199021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.850 ms 00:19:55.163 [2024-07-26 05:17:14.199034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.163 [2024-07-26 05:17:14.199071] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 39.994 ms, result 0 00:19:55.163 true 00:19:55.163 05:17:14 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:55.422 [2024-07-26 05:17:14.401844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.422 [2024-07-26 05:17:14.401890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:55.422 [2024-07-26 05:17:14.401908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.609 ms 00:19:55.422 [2024-07-26 05:17:14.401919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.422 [2024-07-26 05:17:14.401963] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 38.729 ms, result 0 00:19:55.422 true 00:19:55.422 05:17:14 -- ftl/trim.sh@102 -- # killprocess 73851 00:19:55.422 05:17:14 -- common/autotest_common.sh@926 -- # '[' -z 73851 ']' 00:19:55.422 05:17:14 -- common/autotest_common.sh@930 -- # kill -0 73851 00:19:55.422 05:17:14 -- common/autotest_common.sh@931 -- # uname 00:19:55.422 05:17:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:55.422 05:17:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73851 00:19:55.422 05:17:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:55.422 05:17:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:55.422 killing process with pid 73851 00:19:55.422 05:17:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73851' 00:19:55.422 05:17:14 -- common/autotest_common.sh@945 -- # kill 73851 00:19:55.422 05:17:14 -- common/autotest_common.sh@950 -- # wait 73851 00:19:56.816 [2024-07-26 05:17:15.572615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.816 [2024-07-26 05:17:15.572671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:19:56.816 [2024-07-26 05:17:15.572687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:56.816 [2024-07-26 05:17:15.572699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.816 [2024-07-26 05:17:15.572723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:56.817 [2024-07-26 05:17:15.576491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.817 [2024-07-26 05:17:15.576524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:56.817 [2024-07-26 05:17:15.576539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.747 ms 00:19:56.817 [2024-07-26 05:17:15.576549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.576822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.817 [2024-07-26 05:17:15.576840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:56.817 [2024-07-26 05:17:15.576853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:19:56.817 [2024-07-26 05:17:15.576863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.580241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.817 [2024-07-26 05:17:15.580277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:56.817 [2024-07-26 05:17:15.580291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.354 ms 00:19:56.817 [2024-07-26 05:17:15.580304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.586347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.817 [2024-07-26 05:17:15.586381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:56.817 [2024-07-26 05:17:15.586396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.999 ms 00:19:56.817 [2024-07-26 05:17:15.586406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.602374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.817 [2024-07-26 05:17:15.602406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:56.817 [2024-07-26 05:17:15.602434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.906 ms 00:19:56.817 [2024-07-26 05:17:15.602444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.613468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.817 [2024-07-26 05:17:15.613502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:56.817 [2024-07-26 05:17:15.613537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.964 ms 00:19:56.817 [2024-07-26 05:17:15.613547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.613692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.817 [2024-07-26 05:17:15.613705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:56.817 [2024-07-26 05:17:15.613718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:56.817 [2024-07-26 05:17:15.613728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:56.817 [2024-07-26 05:17:15.630149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.817 [2024-07-26 05:17:15.630182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:56.817 [2024-07-26 05:17:15.630197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.397 ms 00:19:56.817 [2024-07-26 05:17:15.630267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.646370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.817 [2024-07-26 05:17:15.646412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:56.817 [2024-07-26 05:17:15.646448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.044 ms 00:19:56.817 [2024-07-26 05:17:15.646457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.661704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.817 [2024-07-26 05:17:15.661737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:56.817 [2024-07-26 05:17:15.661752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.205 ms 00:19:56.817 [2024-07-26 05:17:15.661762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.677165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.817 [2024-07-26 05:17:15.677197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:56.817 [2024-07-26 05:17:15.677218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.336 ms 00:19:56.817 [2024-07-26 05:17:15.677243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.677289] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:56.817 [2024-07-26 05:17:15.677305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677441] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 05:17:15.677723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:56.817 [2024-07-26 
05:17:15.677733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37 through Band 100: 0 / 261120 wr_cnt: 0 state: free (64 identical per-band entries collapsed)
00:19:56.817 [2024-07-26 05:17:15.678509] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:56.817 [2024-07-26 05:17:15.678534] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bdf932e-f5a4-44d7-9d31-9e46b2e97560
00:19:56.817 [2024-07-26 05:17:15.678547] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:56.817 [2024-07-26 05:17:15.678559] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:56.817 [2024-07-26 05:17:15.678569] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:56.817 [2024-07-26 05:17:15.678581] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:56.817 [2024-07-26 05:17:15.678590] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:56.817 [2024-07-26 05:17:15.678603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:19:56.817 [2024-07-26 05:17:15.678613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:19:56.817 [2024-07-26 05:17:15.678624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:19:56.817 [2024-07-26 05:17:15.678633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:19:56.817 [2024-07-26 05:17:15.678645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:56.817 [2024-07-26 05:17:15.678655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:56.817 [2024-07-26 05:17:15.678668]
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:19:56.817 [2024-07-26 05:17:15.678678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.698387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.817 [2024-07-26 05:17:15.698419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:56.817 [2024-07-26 05:17:15.698453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.674 ms 00:19:56.817 [2024-07-26 05:17:15.698463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.698745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.817 [2024-07-26 05:17:15.698763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:56.817 [2024-07-26 05:17:15.698776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:19:56.817 [2024-07-26 05:17:15.698786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.767440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.817 [2024-07-26 05:17:15.767479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:56.817 [2024-07-26 05:17:15.767494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.817 [2024-07-26 05:17:15.767505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.767590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.817 [2024-07-26 05:17:15.767603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.817 [2024-07-26 05:17:15.767615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.817 [2024-07-26 05:17:15.767625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.817 [2024-07-26 05:17:15.767682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.817 [2024-07-26 05:17:15.767694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.818 [2024-07-26 05:17:15.767710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.818 [2024-07-26 05:17:15.767720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.818 [2024-07-26 05:17:15.767742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.818 [2024-07-26 05:17:15.767752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.818 [2024-07-26 05:17:15.767764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.818 [2024-07-26 05:17:15.767774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.818 [2024-07-26 05:17:15.894862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.818 [2024-07-26 05:17:15.894917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.818 [2024-07-26 05:17:15.894935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.818 [2024-07-26 05:17:15.894962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.076 [2024-07-26 05:17:15.943245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.076 [2024-07-26 05:17:15.943286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:19:57.076 [2024-07-26 05:17:15.943304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.076 [2024-07-26 05:17:15.943315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.076 [2024-07-26 05:17:15.943399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.076 [2024-07-26 05:17:15.943411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.076 [2024-07-26 05:17:15.943427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.076 [2024-07-26 05:17:15.943438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.076 [2024-07-26 05:17:15.943470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.076 [2024-07-26 05:17:15.943481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.076 [2024-07-26 05:17:15.943493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.076 [2024-07-26 05:17:15.943503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.076 [2024-07-26 05:17:15.943617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.076 [2024-07-26 05:17:15.943633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.076 [2024-07-26 05:17:15.943645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.076 [2024-07-26 05:17:15.943655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.076 [2024-07-26 05:17:15.943695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.076 [2024-07-26 05:17:15.943707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:57.076 [2024-07-26 05:17:15.943719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.076 [2024-07-26 05:17:15.943729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.076 [2024-07-26 05:17:15.943770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.076 [2024-07-26 05:17:15.943783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.076 [2024-07-26 05:17:15.943799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.076 [2024-07-26 05:17:15.943808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.076 [2024-07-26 05:17:15.943856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.076 [2024-07-26 05:17:15.943867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.076 [2024-07-26 05:17:15.943880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.076 [2024-07-26 05:17:15.943890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.076 [2024-07-26 05:17:15.944027] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.386 ms, result 0 00:19:58.454 05:17:17 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:58.454 [2024-07-26 05:17:17.368432] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:58.454 [2024-07-26 05:17:17.368596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73926 ] 00:19:58.454 [2024-07-26 05:17:17.549494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.713 [2024-07-26 05:17:17.782406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.281 [2024-07-26 05:17:18.195799] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:59.281 [2024-07-26 05:17:18.195883] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:59.281 [2024-07-26 05:17:18.352252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.281 [2024-07-26 05:17:18.352302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:59.281 [2024-07-26 05:17:18.352318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:59.281 [2024-07-26 05:17:18.352332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.281 [2024-07-26 05:17:18.355519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.281 [2024-07-26 05:17:18.355556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:59.281 [2024-07-26 05:17:18.355569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.168 ms 00:19:59.281 [2024-07-26 05:17:18.355597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.281 [2024-07-26 05:17:18.355687] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:59.281 [2024-07-26 05:17:18.356845] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:59.281 [2024-07-26 05:17:18.356879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.281 [2024-07-26 05:17:18.356894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:59.281 [2024-07-26 05:17:18.356904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:19:59.281 [2024-07-26 05:17:18.356914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.281 [2024-07-26 05:17:18.358367] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:59.281 [2024-07-26 05:17:18.378371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.281 [2024-07-26 05:17:18.378420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:59.281 [2024-07-26 05:17:18.378435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.005 ms 00:19:59.281 [2024-07-26 05:17:18.378444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.281 [2024-07-26 05:17:18.378560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.281 [2024-07-26 05:17:18.378584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:59.281 [2024-07-26 05:17:18.378598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:59.281 [2024-07-26 05:17:18.378608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.281 [2024-07-26 05:17:18.385257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.281 [2024-07-26 
05:17:18.385292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:59.281 [2024-07-26 05:17:18.385304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.608 ms 00:19:59.281 [2024-07-26 05:17:18.385314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.281 [2024-07-26 05:17:18.385421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.281 [2024-07-26 05:17:18.385439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:59.281 [2024-07-26 05:17:18.385450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:59.281 [2024-07-26 05:17:18.385460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.281 [2024-07-26 05:17:18.385489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.281 [2024-07-26 05:17:18.385501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:59.281 [2024-07-26 05:17:18.385511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:59.281 [2024-07-26 05:17:18.385521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.281 [2024-07-26 05:17:18.385547] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:59.541 [2024-07-26 05:17:18.391424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.541 [2024-07-26 05:17:18.391456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:59.541 [2024-07-26 05:17:18.391468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.885 ms 00:19:59.541 [2024-07-26 05:17:18.391478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.541 [2024-07-26 05:17:18.391563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.541 [2024-07-26 05:17:18.391578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:59.541 [2024-07-26 05:17:18.391589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:59.541 [2024-07-26 05:17:18.391599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.541 [2024-07-26 05:17:18.391620] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:59.541 [2024-07-26 05:17:18.391641] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:59.541 [2024-07-26 05:17:18.391675] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:59.541 [2024-07-26 05:17:18.391693] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:59.541 [2024-07-26 05:17:18.391762] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:59.541 [2024-07-26 05:17:18.391775] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:59.541 [2024-07-26 05:17:18.391787] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:59.541 [2024-07-26 05:17:18.391800] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:59.541 [2024-07-26 05:17:18.391812] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:59.541 [2024-07-26 05:17:18.391823] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:59.541 [2024-07-26 05:17:18.391832] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:59.541 [2024-07-26 05:17:18.391842] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:59.541 [2024-07-26 05:17:18.391852] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:59.541 [2024-07-26 05:17:18.391862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.541 [2024-07-26 05:17:18.391875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:59.541 [2024-07-26 05:17:18.391885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:19:59.541 [2024-07-26 05:17:18.391895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.541 [2024-07-26 05:17:18.391955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.541 [2024-07-26 05:17:18.391966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:59.541 [2024-07-26 05:17:18.391976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:59.541 [2024-07-26 05:17:18.391986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.542 [2024-07-26 05:17:18.392055] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:59.542 [2024-07-26 05:17:18.392067] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:59.542 [2024-07-26 05:17:18.392077] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.542 [2024-07-26 05:17:18.392090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.542 [2024-07-26 05:17:18.392100] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:59.542 [2024-07-26 05:17:18.392109] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:59.542 [2024-07-26 05:17:18.392118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:59.542 [2024-07-26 05:17:18.392128] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:59.542 [2024-07-26 05:17:18.392137] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:59.542 [2024-07-26 05:17:18.392148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.542 [2024-07-26 05:17:18.392158] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:59.542 [2024-07-26 05:17:18.392167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:59.542 [2024-07-26 05:17:18.392176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.542 [2024-07-26 05:17:18.392185] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:59.542 [2024-07-26 05:17:18.392194] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:59.542 [2024-07-26 05:17:18.392203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.542 [2024-07-26 05:17:18.392211] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:59.542 [2024-07-26 05:17:18.392232] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:59.542 [2024-07-26 05:17:18.392241] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:59.542 [2024-07-26 05:17:18.392260] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:59.542 [2024-07-26 05:17:18.392270] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:59.542 [2024-07-26 05:17:18.392279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:59.542 [2024-07-26 05:17:18.392288] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:59.542 [2024-07-26 05:17:18.392297] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:59.542 [2024-07-26 05:17:18.392306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:59.542 [2024-07-26 05:17:18.392316] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:59.542 [2024-07-26 05:17:18.392325] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:59.542 [2024-07-26 05:17:18.392333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:59.542 [2024-07-26 05:17:18.392342] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:59.542 [2024-07-26 05:17:18.392351] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:59.542 [2024-07-26 05:17:18.392360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:59.542 [2024-07-26 05:17:18.392369] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:59.542 [2024-07-26 05:17:18.392377] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:59.542 [2024-07-26 05:17:18.392386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:59.542 [2024-07-26 05:17:18.392395] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:59.542 [2024-07-26 05:17:18.392403] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:59.542 [2024-07-26 05:17:18.392412] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.542 [2024-07-26 05:17:18.392420] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:59.542 [2024-07-26 05:17:18.392429] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:59.542 [2024-07-26 05:17:18.392438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.542 [2024-07-26 05:17:18.392446] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:59.542 [2024-07-26 05:17:18.392456] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:59.542 [2024-07-26 05:17:18.392466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.542 [2024-07-26 05:17:18.392476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.542 [2024-07-26 05:17:18.392485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:59.542 [2024-07-26 05:17:18.392494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:59.542 [2024-07-26 05:17:18.392503] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:59.542 [2024-07-26 05:17:18.392513] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:59.542 [2024-07-26 05:17:18.392521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:59.542 [2024-07-26 05:17:18.392531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:59.542 [2024-07-26 05:17:18.392540] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:59.542 [2024-07-26 05:17:18.392558] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.542 [2024-07-26 05:17:18.392569] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:59.542 [2024-07-26 05:17:18.392579] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:59.542 [2024-07-26 05:17:18.392589] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:59.542 [2024-07-26 05:17:18.392599] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:59.542 [2024-07-26 05:17:18.392609] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:59.542 [2024-07-26 05:17:18.392619] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:59.542 [2024-07-26 05:17:18.392630] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:59.542 [2024-07-26 05:17:18.392640] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:59.542 [2024-07-26 05:17:18.392649] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:59.542 [2024-07-26 05:17:18.392659] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:59.542 [2024-07-26 05:17:18.392669] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:59.542 [2024-07-26 05:17:18.392679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:59.542 [2024-07-26 05:17:18.392689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:59.542 [2024-07-26 05:17:18.392699] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:59.542 [2024-07-26 05:17:18.392710] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.542 [2024-07-26 05:17:18.392721] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:59.542 [2024-07-26 05:17:18.392731] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:59.542 [2024-07-26 05:17:18.392741] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:59.542 [2024-07-26 05:17:18.392751] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:59.542 [2024-07-26 05:17:18.392761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.542 [2024-07-26 05:17:18.392774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:59.542 [2024-07-26 05:17:18.392786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:19:59.542 [2024-07-26 05:17:18.392796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.542 [2024-07-26 05:17:18.420350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.542 [2024-07-26 05:17:18.420386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:59.542 [2024-07-26 05:17:18.420399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.506 ms 00:19:59.542 [2024-07-26 05:17:18.420410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.542 [2024-07-26 05:17:18.420528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.542 [2024-07-26 05:17:18.420540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:59.542 [2024-07-26 05:17:18.420551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:59.542 [2024-07-26 05:17:18.420561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.542 [2024-07-26 05:17:18.491868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.542 [2024-07-26 05:17:18.491911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:59.542 [2024-07-26 05:17:18.491925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.281 ms 00:19:59.542 [2024-07-26 05:17:18.491936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.542 [2024-07-26 05:17:18.492038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.542 [2024-07-26 05:17:18.492050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:59.542 [2024-07-26 05:17:18.492061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:59.542 [2024-07-26 05:17:18.492071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.542 [2024-07-26 05:17:18.492535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.542 [2024-07-26 05:17:18.492557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:59.542 [2024-07-26 05:17:18.492569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:19:59.542 [2024-07-26 05:17:18.492580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.542 [2024-07-26 05:17:18.492705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.542 [2024-07-26 05:17:18.492722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:59.543 [2024-07-26 05:17:18.492734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:59.543 [2024-07-26 05:17:18.492744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.543 [2024-07-26 05:17:18.517169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.543 [2024-07-26 05:17:18.517220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:59.543 [2024-07-26 05:17:18.517236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.398 ms 00:19:59.543 
[2024-07-26 05:17:18.517246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.543 [2024-07-26 05:17:18.536897] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:59.543 [2024-07-26 05:17:18.536938] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:59.543 [2024-07-26 05:17:18.536953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.543 [2024-07-26 05:17:18.536963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:59.543 [2024-07-26 05:17:18.536975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.560 ms 00:19:59.543 [2024-07-26 05:17:18.536984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.543 [2024-07-26 05:17:18.568315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.543 [2024-07-26 05:17:18.568355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:59.543 [2024-07-26 05:17:18.568370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.248 ms 00:19:59.543 [2024-07-26 05:17:18.568386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.543 [2024-07-26 05:17:18.588399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.543 [2024-07-26 05:17:18.588448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:59.543 [2024-07-26 05:17:18.588461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.930 ms 00:19:59.543 [2024-07-26 05:17:18.588487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.543 [2024-07-26 05:17:18.607539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.543 [2024-07-26 05:17:18.607584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:59.543 [2024-07-26 05:17:18.607597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.977 ms 00:19:59.543 [2024-07-26 05:17:18.607623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.543 [2024-07-26 05:17:18.608101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.543 [2024-07-26 05:17:18.608122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:59.543 [2024-07-26 05:17:18.608134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:19:59.543 [2024-07-26 05:17:18.608143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.802 [2024-07-26 05:17:18.700281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.802 [2024-07-26 05:17:18.700352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:59.802 [2024-07-26 05:17:18.700368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.111 ms 00:19:59.802 [2024-07-26 05:17:18.700379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.802 [2024-07-26 05:17:18.713114] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:59.802 [2024-07-26 05:17:18.729641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.802 [2024-07-26 05:17:18.729696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:59.802 [2024-07-26 05:17:18.729711] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.145 ms 00:19:59.802 [2024-07-26 05:17:18.729722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.802 [2024-07-26 05:17:18.729832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.802 [2024-07-26 05:17:18.729846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:59.802 [2024-07-26 05:17:18.729857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:59.802 [2024-07-26 05:17:18.729872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.802 [2024-07-26 05:17:18.729929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.802 [2024-07-26 05:17:18.729940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:59.802 [2024-07-26 05:17:18.729951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:59.802 [2024-07-26 05:17:18.729960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.802 [2024-07-26 05:17:18.732001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.802 [2024-07-26 05:17:18.732032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:59.802 [2024-07-26 05:17:18.732042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.021 ms 00:19:59.802 [2024-07-26 05:17:18.732052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.802 [2024-07-26 05:17:18.732083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.802 [2024-07-26 05:17:18.732094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:59.802 [2024-07-26 05:17:18.732108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:59.802 [2024-07-26 05:17:18.732118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.802 [2024-07-26 05:17:18.732154] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:59.802 [2024-07-26 05:17:18.732166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.802 [2024-07-26 05:17:18.732175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:59.802 [2024-07-26 05:17:18.732185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:59.802 [2024-07-26 05:17:18.732195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.802 [2024-07-26 05:17:18.770785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.802 [2024-07-26 05:17:18.770827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:59.802 [2024-07-26 05:17:18.770856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.555 ms 00:19:59.802 [2024-07-26 05:17:18.770867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.802 [2024-07-26 05:17:18.770973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.802 [2024-07-26 05:17:18.770986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:59.802 [2024-07-26 05:17:18.770997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:59.802 [2024-07-26 05:17:18.771008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.802 [2024-07-26 05:17:18.771962] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:59.802 [2024-07-26 05:17:18.777168] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 419.416 ms, result 0 00:19:59.802 [2024-07-26 05:17:18.777991] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:59.802 [2024-07-26 05:17:18.796650] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:08.435  Copying: 33/256 [MB] (33 MBps) Copying: 64/256 [MB] (31 MBps) Copying: 96/256 [MB] (31 MBps) Copying: 127/256 [MB] (31 MBps) Copying: 156/256 [MB] (29 MBps) Copying: 186/256 [MB] (29 MBps) Copying: 215/256 [MB] (29 MBps) Copying: 246/256 [MB] (31 MBps) Copying: 256/256 [MB] (average 30 MBps)[2024-07-26 05:17:27.476425] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:08.435 [2024-07-26 05:17:27.492754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.435 [2024-07-26 05:17:27.492799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:08.435 [2024-07-26 05:17:27.492821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:08.435 [2024-07-26 05:17:27.492832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.435 [2024-07-26 05:17:27.492860] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:08.435 [2024-07-26 05:17:27.496596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.435 [2024-07-26 05:17:27.496628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:08.435 [2024-07-26 05:17:27.496641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.718 ms 00:20:08.435 [2024-07-26 05:17:27.496650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.435 [2024-07-26 05:17:27.496924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.435 [2024-07-26 05:17:27.496937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:08.435 [2024-07-26 05:17:27.496948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:20:08.435 [2024-07-26 05:17:27.496958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.435 [2024-07-26 05:17:27.500400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.435 [2024-07-26 05:17:27.500426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:08.435 [2024-07-26 05:17:27.500438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.419 ms 00:20:08.435 [2024-07-26 05:17:27.500448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.435 [2024-07-26 05:17:27.506242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.435 [2024-07-26 05:17:27.506274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:08.435 [2024-07-26 05:17:27.506286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.768 ms 00:20:08.435 [2024-07-26 05:17:27.506296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.696 [2024-07-26 05:17:27.546390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.696 [2024-07-26 05:17:27.546431] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:08.696 [2024-07-26 05:17:27.546446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.007 ms 00:20:08.696 [2024-07-26 05:17:27.546456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.696 [2024-07-26 05:17:27.567976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.696 [2024-07-26 05:17:27.568018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:08.696 [2024-07-26 05:17:27.568032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.460 ms 00:20:08.696 [2024-07-26 05:17:27.568042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.696 [2024-07-26 05:17:27.568225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.696 [2024-07-26 05:17:27.568240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:08.696 [2024-07-26 05:17:27.568252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:08.696 [2024-07-26 05:17:27.568262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.696 [2024-07-26 05:17:27.607101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.696 [2024-07-26 05:17:27.607136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:08.696 [2024-07-26 05:17:27.607161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.819 ms 00:20:08.696 [2024-07-26 05:17:27.607187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.696 [2024-07-26 05:17:27.645139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.696 [2024-07-26 05:17:27.645173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:08.696 [2024-07-26 05:17:27.645185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.884 ms 00:20:08.696 [2024-07-26 05:17:27.645195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.696 [2024-07-26 05:17:27.682738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.696 [2024-07-26 05:17:27.682773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:08.696 [2024-07-26 05:17:27.682786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.468 ms 00:20:08.696 [2024-07-26 05:17:27.682795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.696 [2024-07-26 05:17:27.721293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.696 [2024-07-26 05:17:27.721325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:08.696 [2024-07-26 05:17:27.721338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.418 ms 00:20:08.696 [2024-07-26 05:17:27.721347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.696 [2024-07-26 05:17:27.721400] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:08.696 [2024-07-26 05:17:27.721418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:08.696 [2024-07-26 05:17:27.721430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:08.696 [2024-07-26 05:17:27.721441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free
00:20:08.696 [2024-07-26 05:17:27.721452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4 through Band 100: 0 / 261120 wr_cnt: 0 state: free (97 identical per-band entries collapsed)
00:20:08.697 [2024-07-26 05:17:27.722489] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:08.697 [2024-07-26 05:17:27.722510] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0bdf932e-f5a4-44d7-9d31-9e46b2e97560
00:20:08.697 [2024-07-26 05:17:27.722521] ftl_debug.c:
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:08.697 [2024-07-26 05:17:27.722531] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:08.697 [2024-07-26 05:17:27.722541] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:08.697 [2024-07-26 05:17:27.722550] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:08.697 [2024-07-26 05:17:27.722560] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:08.697 [2024-07-26 05:17:27.722570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:08.697 [2024-07-26 05:17:27.722584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:08.697 [2024-07-26 05:17:27.722593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:08.697 [2024-07-26 05:17:27.722602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:08.697 [2024-07-26 05:17:27.722613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.697 [2024-07-26 05:17:27.722624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:08.697 [2024-07-26 05:17:27.722634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:20:08.697 [2024-07-26 05:17:27.722649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.697 [2024-07-26 05:17:27.741781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.697 [2024-07-26 05:17:27.741814] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:08.697 [2024-07-26 05:17:27.741826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.110 ms 00:20:08.697 [2024-07-26 05:17:27.741836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.697 [2024-07-26 05:17:27.742098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.697 [2024-07-26 05:17:27.742139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:08.697 [2024-07-26 05:17:27.742149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:20:08.697 [2024-07-26 05:17:27.742159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.697 [2024-07-26 05:17:27.801532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.697 [2024-07-26 05:17:27.801569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:08.697 [2024-07-26 05:17:27.801587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.697 [2024-07-26 05:17:27.801603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.697 [2024-07-26 05:17:27.801694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.697 [2024-07-26 05:17:27.801706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:08.697 [2024-07-26 05:17:27.801717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.697 [2024-07-26 05:17:27.801727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.697 [2024-07-26 05:17:27.801777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.697 [2024-07-26 05:17:27.801789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.697 [2024-07-26 05:17:27.801800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:20:08.697 [2024-07-26 05:17:27.801810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.697 [2024-07-26 05:17:27.801834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.697 [2024-07-26 05:17:27.801845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.697 [2024-07-26 05:17:27.801855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.697 [2024-07-26 05:17:27.801864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.957 [2024-07-26 05:17:27.919863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.957 [2024-07-26 05:17:27.919922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.957 [2024-07-26 05:17:27.919938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.957 [2024-07-26 05:17:27.919953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.957 [2024-07-26 05:17:27.968374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.957 [2024-07-26 05:17:27.968422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.957 [2024-07-26 05:17:27.968452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.957 [2024-07-26 05:17:27.968462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.957 [2024-07-26 05:17:27.968541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.957 [2024-07-26 05:17:27.968553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:08.957 [2024-07-26 05:17:27.968563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.957 [2024-07-26 05:17:27.968573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.957 [2024-07-26 05:17:27.968603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.957 [2024-07-26 05:17:27.968619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:08.957 [2024-07-26 05:17:27.968629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.957 [2024-07-26 05:17:27.968639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.957 [2024-07-26 05:17:27.968745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.957 [2024-07-26 05:17:27.968758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:08.957 [2024-07-26 05:17:27.968769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.957 [2024-07-26 05:17:27.968778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.957 [2024-07-26 05:17:27.968819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.957 [2024-07-26 05:17:27.968831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:08.957 [2024-07-26 05:17:27.968846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.957 [2024-07-26 05:17:27.968856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.957 [2024-07-26 05:17:27.968894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.957 [2024-07-26 05:17:27.968904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:08.957 [2024-07-26 
05:17:27.968915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:08.957 [2024-07-26 05:17:27.968924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:08.957 [2024-07-26 05:17:27.968969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:08.957 [2024-07-26 05:17:27.968984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:08.957 [2024-07-26 05:17:27.968998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:08.957 [2024-07-26 05:17:27.969007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:08.957 [2024-07-26 05:17:27.969145] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 476.414 ms, result 0
00:20:10.336
00:20:10.336
00:20:10.336 05:17:29 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:10.904 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:20:10.904 05:17:29 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:20:10.904 05:17:29 -- ftl/trim.sh@109 -- # fio_kill
00:20:10.904 05:17:29 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:10.904 05:17:29 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:10.904 05:17:29 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:20:10.904 05:17:29 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:20:10.904 05:17:29 -- ftl/trim.sh@20 -- # killprocess 73851
00:20:10.904 Process with pid 73851 is not found
00:20:10.904 05:17:29 -- common/autotest_common.sh@926 -- # '[' -z 73851 ']'
00:20:10.904 05:17:29 -- common/autotest_common.sh@930 -- # kill -0 73851
00:20:10.904 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (73851) - No such process
00:20:10.904 05:17:29 -- common/autotest_common.sh@953 -- # echo 'Process with pid 73851 is not found'
************************************
00:20:10.904 END TEST ftl_trim
************************************
00:20:10.904
00:20:10.904 real 1m8.415s
00:20:10.904 user 1m35.841s
00:20:10.904 sys 0m6.770s
00:20:10.904 05:17:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:10.904 05:17:29 -- common/autotest_common.sh@10 -- # set +x
00:20:10.904 05:17:29 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0
00:20:10.904 05:17:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:20:10.904 05:17:29 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:20:10.904 05:17:29 -- common/autotest_common.sh@10 -- # set +x
00:20:10.904 ************************************
00:20:10.904 START TEST ftl_restore
************************************
00:20:10.904 05:17:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0
00:20:11.163 * Looking for test storage...
00:20:11.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:20:11.164 05:17:30 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:20:11.164 05:17:30 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:20:11.164 05:17:30 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:20:11.164 05:17:30 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:20:11.164 05:17:30 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:20:11.164 05:17:30 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:20:11.164 05:17:30 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:11.164 05:17:30 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:20:11.164 05:17:30 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:20:11.164 05:17:30 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:11.164 05:17:30 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:11.164 05:17:30 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:20:11.164 05:17:30 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:20:11.164 05:17:30 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:11.164 05:17:30 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:11.164 05:17:30 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:20:11.164 05:17:30 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:20:11.164 05:17:30 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:11.164 05:17:30 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:11.164 05:17:30 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:20:11.164 05:17:30 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:20:11.164 05:17:30 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:11.164 05:17:30 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:11.164 05:17:30 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:11.164 05:17:30 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:11.164 05:17:30 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:20:11.164 05:17:30 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:20:11.164 05:17:30 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:11.164 05:17:30 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:11.164 05:17:30 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:11.164 05:17:30 -- ftl/restore.sh@13 -- # mktemp -d
00:20:11.164 05:17:30 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Ze6Fny7MPi
00:20:11.164 05:17:30 -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:20:11.164 05:17:30 -- ftl/restore.sh@16 -- # case $opt in
00:20:11.164 05:17:30 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0
00:20:11.164 05:17:30 -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:20:11.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
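
The xtrace above shows restore.sh consuming its options before reading the positional base-device argument. A minimal sketch of that option handling, reconstructed from the traced commands; only -c is confirmed by this run, so the -u and -f branches and the variable names uuid and fast are assumptions:

    while getopts ':u:c:f' opt; do
            case $opt in
                    c) nv_cache=$OPTARG ;;  # NV cache device PCI address; the trace shows 0000:00:06.0
                    u) uuid=$OPTARG ;;      # assumed: UUID of an existing FTL instance to restore
                    f) fast=1 ;;            # assumed: boolean flag, takes no argument
            esac
    done
    shift $((OPTIND - 1))                   # with '-c <addr>' this resolves to the 'shift 2' traced just below
    device=$1                               # base device PCI address; the trace shows 0000:00:07.0

The same pattern also explains the later "restore.sh: line 54: [: : integer expression expected" message in this log: a numeric test such as '[' '' -eq 1 ']' fails when its option was never supplied and the variable expanded to an empty string.
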
00:20:11.164 05:17:30 -- ftl/restore.sh@23 -- # shift 2 00:20:11.164 05:17:30 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:20:11.164 05:17:30 -- ftl/restore.sh@25 -- # timeout=240 00:20:11.164 05:17:30 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:11.164 05:17:30 -- ftl/restore.sh@39 -- # svcpid=74118 00:20:11.164 05:17:30 -- ftl/restore.sh@41 -- # waitforlisten 74118 00:20:11.164 05:17:30 -- common/autotest_common.sh@819 -- # '[' -z 74118 ']' 00:20:11.164 05:17:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.164 05:17:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:11.164 05:17:30 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:11.164 05:17:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.164 05:17:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:11.164 05:17:30 -- common/autotest_common.sh@10 -- # set +x 00:20:11.164 [2024-07-26 05:17:30.206069] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:11.164 [2024-07-26 05:17:30.206490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74118 ] 00:20:11.423 [2024-07-26 05:17:30.394988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.682 [2024-07-26 05:17:30.690158] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:11.682 [2024-07-26 05:17:30.690582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.060 05:17:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:13.060 05:17:31 -- common/autotest_common.sh@852 -- # return 0 00:20:13.060 05:17:31 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:20:13.060 05:17:31 -- ftl/common.sh@54 -- # local name=nvme0 00:20:13.060 05:17:31 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:20:13.060 05:17:31 -- ftl/common.sh@56 -- # local size=103424 00:20:13.060 05:17:31 -- ftl/common.sh@59 -- # local base_bdev 00:20:13.060 05:17:31 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:20:13.060 05:17:32 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:13.060 05:17:32 -- ftl/common.sh@62 -- # local base_size 00:20:13.060 05:17:32 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:13.060 05:17:32 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:20:13.060 05:17:32 -- common/autotest_common.sh@1358 -- # local bdev_info 00:20:13.060 05:17:32 -- common/autotest_common.sh@1359 -- # local bs 00:20:13.060 05:17:32 -- common/autotest_common.sh@1360 -- # local nb 00:20:13.060 05:17:32 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:13.319 05:17:32 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:20:13.319 { 00:20:13.319 "name": "nvme0n1", 00:20:13.319 "aliases": [ 00:20:13.319 "7f2e19a9-2caa-43e5-831a-88472638a513" 00:20:13.319 ], 00:20:13.319 "product_name": "NVMe disk", 00:20:13.319 "block_size": 4096, 00:20:13.319 "num_blocks": 1310720, 00:20:13.319 "uuid": "7f2e19a9-2caa-43e5-831a-88472638a513", 00:20:13.319 "assigned_rate_limits": { 00:20:13.319 "rw_ios_per_sec": 0, 
00:20:13.319 "rw_mbytes_per_sec": 0, 00:20:13.319 "r_mbytes_per_sec": 0, 00:20:13.320 "w_mbytes_per_sec": 0 00:20:13.320 }, 00:20:13.320 "claimed": true, 00:20:13.320 "claim_type": "read_many_write_one", 00:20:13.320 "zoned": false, 00:20:13.320 "supported_io_types": { 00:20:13.320 "read": true, 00:20:13.320 "write": true, 00:20:13.320 "unmap": true, 00:20:13.320 "write_zeroes": true, 00:20:13.320 "flush": true, 00:20:13.320 "reset": true, 00:20:13.320 "compare": true, 00:20:13.320 "compare_and_write": false, 00:20:13.320 "abort": true, 00:20:13.320 "nvme_admin": true, 00:20:13.320 "nvme_io": true 00:20:13.320 }, 00:20:13.320 "driver_specific": { 00:20:13.320 "nvme": [ 00:20:13.320 { 00:20:13.320 "pci_address": "0000:00:07.0", 00:20:13.320 "trid": { 00:20:13.320 "trtype": "PCIe", 00:20:13.320 "traddr": "0000:00:07.0" 00:20:13.320 }, 00:20:13.320 "ctrlr_data": { 00:20:13.320 "cntlid": 0, 00:20:13.320 "vendor_id": "0x1b36", 00:20:13.320 "model_number": "QEMU NVMe Ctrl", 00:20:13.320 "serial_number": "12341", 00:20:13.320 "firmware_revision": "8.0.0", 00:20:13.320 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:13.320 "oacs": { 00:20:13.320 "security": 0, 00:20:13.320 "format": 1, 00:20:13.320 "firmware": 0, 00:20:13.320 "ns_manage": 1 00:20:13.320 }, 00:20:13.320 "multi_ctrlr": false, 00:20:13.320 "ana_reporting": false 00:20:13.320 }, 00:20:13.320 "vs": { 00:20:13.320 "nvme_version": "1.4" 00:20:13.320 }, 00:20:13.320 "ns_data": { 00:20:13.320 "id": 1, 00:20:13.320 "can_share": false 00:20:13.320 } 00:20:13.320 } 00:20:13.320 ], 00:20:13.320 "mp_policy": "active_passive" 00:20:13.320 } 00:20:13.320 } 00:20:13.320 ]' 00:20:13.320 05:17:32 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:20:13.320 05:17:32 -- common/autotest_common.sh@1362 -- # bs=4096 00:20:13.320 05:17:32 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:20:13.320 05:17:32 -- common/autotest_common.sh@1363 -- # nb=1310720 00:20:13.320 05:17:32 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:20:13.320 05:17:32 -- common/autotest_common.sh@1367 -- # echo 5120 00:20:13.320 05:17:32 -- ftl/common.sh@63 -- # base_size=5120 00:20:13.320 05:17:32 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:13.320 05:17:32 -- ftl/common.sh@67 -- # clear_lvols 00:20:13.320 05:17:32 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:13.320 05:17:32 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:13.579 05:17:32 -- ftl/common.sh@28 -- # stores=7673dc2c-bb0b-4706-9e19-2cd7b2ab54cb 00:20:13.579 05:17:32 -- ftl/common.sh@29 -- # for lvs in $stores 00:20:13.579 05:17:32 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7673dc2c-bb0b-4706-9e19-2cd7b2ab54cb 00:20:13.838 05:17:32 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:13.839 05:17:32 -- ftl/common.sh@68 -- # lvs=ddf58bfc-49c4-4f62-9511-08d0873d51e1 00:20:13.839 05:17:32 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ddf58bfc-49c4-4f62-9511-08d0873d51e1 00:20:14.098 05:17:33 -- ftl/restore.sh@43 -- # split_bdev=41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 00:20:14.098 05:17:33 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:20:14.098 05:17:33 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 00:20:14.098 05:17:33 -- ftl/common.sh@35 -- # local name=nvc0 00:20:14.098 05:17:33 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:20:14.098 05:17:33 -- ftl/common.sh@37 -- # local base_bdev=41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 00:20:14.098 05:17:33 -- ftl/common.sh@38 -- # local cache_size= 00:20:14.098 05:17:33 -- ftl/common.sh@41 -- # get_bdev_size 41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 00:20:14.098 05:17:33 -- common/autotest_common.sh@1357 -- # local bdev_name=41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 00:20:14.098 05:17:33 -- common/autotest_common.sh@1358 -- # local bdev_info 00:20:14.098 05:17:33 -- common/autotest_common.sh@1359 -- # local bs 00:20:14.098 05:17:33 -- common/autotest_common.sh@1360 -- # local nb 00:20:14.098 05:17:33 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 00:20:14.359 05:17:33 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:20:14.359 { 00:20:14.359 "name": "41dbcb7a-8c57-49e2-8f0e-4ba56ac30966", 00:20:14.359 "aliases": [ 00:20:14.359 "lvs/nvme0n1p0" 00:20:14.359 ], 00:20:14.359 "product_name": "Logical Volume", 00:20:14.359 "block_size": 4096, 00:20:14.359 "num_blocks": 26476544, 00:20:14.359 "uuid": "41dbcb7a-8c57-49e2-8f0e-4ba56ac30966", 00:20:14.359 "assigned_rate_limits": { 00:20:14.359 "rw_ios_per_sec": 0, 00:20:14.359 "rw_mbytes_per_sec": 0, 00:20:14.359 "r_mbytes_per_sec": 0, 00:20:14.359 "w_mbytes_per_sec": 0 00:20:14.359 }, 00:20:14.359 "claimed": false, 00:20:14.359 "zoned": false, 00:20:14.359 "supported_io_types": { 00:20:14.359 "read": true, 00:20:14.359 "write": true, 00:20:14.359 "unmap": true, 00:20:14.359 "write_zeroes": true, 00:20:14.359 "flush": false, 00:20:14.359 "reset": true, 00:20:14.359 "compare": false, 00:20:14.359 "compare_and_write": false, 00:20:14.359 "abort": false, 00:20:14.359 "nvme_admin": false, 00:20:14.359 "nvme_io": false 00:20:14.359 }, 00:20:14.359 "driver_specific": { 00:20:14.359 "lvol": { 00:20:14.359 "lvol_store_uuid": "ddf58bfc-49c4-4f62-9511-08d0873d51e1", 00:20:14.359 "base_bdev": "nvme0n1", 00:20:14.359 "thin_provision": true, 00:20:14.359 "snapshot": false, 00:20:14.359 "clone": false, 00:20:14.359 "esnap_clone": false 00:20:14.359 } 00:20:14.359 } 00:20:14.359 } 00:20:14.359 ]' 00:20:14.359 05:17:33 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:20:14.359 05:17:33 -- common/autotest_common.sh@1362 -- # bs=4096 00:20:14.359 05:17:33 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:20:14.359 05:17:33 -- common/autotest_common.sh@1363 -- # nb=26476544 00:20:14.359 05:17:33 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:20:14.359 05:17:33 -- common/autotest_common.sh@1367 -- # echo 103424 00:20:14.359 05:17:33 -- ftl/common.sh@41 -- # local base_size=5171 00:20:14.359 05:17:33 -- ftl/common.sh@44 -- # local nvc_bdev 00:20:14.634 05:17:33 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:20:14.634 05:17:33 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:14.634 05:17:33 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:14.634 05:17:33 -- ftl/common.sh@48 -- # get_bdev_size 41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 00:20:14.634 05:17:33 -- common/autotest_common.sh@1357 -- # local bdev_name=41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 00:20:14.634 05:17:33 -- common/autotest_common.sh@1358 -- # local bdev_info 00:20:14.634 05:17:33 -- common/autotest_common.sh@1359 -- # local bs 00:20:14.634 05:17:33 -- common/autotest_common.sh@1360 -- # local nb 00:20:14.634 05:17:33 -- 
common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 00:20:14.912 05:17:33 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:20:14.912 { 00:20:14.912 "name": "41dbcb7a-8c57-49e2-8f0e-4ba56ac30966", 00:20:14.912 "aliases": [ 00:20:14.912 "lvs/nvme0n1p0" 00:20:14.912 ], 00:20:14.912 "product_name": "Logical Volume", 00:20:14.912 "block_size": 4096, 00:20:14.912 "num_blocks": 26476544, 00:20:14.912 "uuid": "41dbcb7a-8c57-49e2-8f0e-4ba56ac30966", 00:20:14.912 "assigned_rate_limits": { 00:20:14.912 "rw_ios_per_sec": 0, 00:20:14.912 "rw_mbytes_per_sec": 0, 00:20:14.912 "r_mbytes_per_sec": 0, 00:20:14.912 "w_mbytes_per_sec": 0 00:20:14.912 }, 00:20:14.912 "claimed": false, 00:20:14.912 "zoned": false, 00:20:14.912 "supported_io_types": { 00:20:14.912 "read": true, 00:20:14.912 "write": true, 00:20:14.912 "unmap": true, 00:20:14.912 "write_zeroes": true, 00:20:14.912 "flush": false, 00:20:14.912 "reset": true, 00:20:14.912 "compare": false, 00:20:14.912 "compare_and_write": false, 00:20:14.912 "abort": false, 00:20:14.912 "nvme_admin": false, 00:20:14.912 "nvme_io": false 00:20:14.912 }, 00:20:14.912 "driver_specific": { 00:20:14.912 "lvol": { 00:20:14.912 "lvol_store_uuid": "ddf58bfc-49c4-4f62-9511-08d0873d51e1", 00:20:14.912 "base_bdev": "nvme0n1", 00:20:14.912 "thin_provision": true, 00:20:14.912 "snapshot": false, 00:20:14.912 "clone": false, 00:20:14.912 "esnap_clone": false 00:20:14.912 } 00:20:14.912 } 00:20:14.912 } 00:20:14.912 ]' 00:20:14.912 05:17:33 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:20:14.912 05:17:33 -- common/autotest_common.sh@1362 -- # bs=4096 00:20:14.912 05:17:33 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:20:14.912 05:17:33 -- common/autotest_common.sh@1363 -- # nb=26476544 00:20:14.912 05:17:33 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:20:14.912 05:17:33 -- common/autotest_common.sh@1367 -- # echo 103424 00:20:14.912 05:17:33 -- ftl/common.sh@48 -- # cache_size=5171 00:20:14.912 05:17:33 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:15.179 05:17:34 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:15.179 05:17:34 -- ftl/restore.sh@48 -- # get_bdev_size 41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 00:20:15.179 05:17:34 -- common/autotest_common.sh@1357 -- # local bdev_name=41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 00:20:15.179 05:17:34 -- common/autotest_common.sh@1358 -- # local bdev_info 00:20:15.179 05:17:34 -- common/autotest_common.sh@1359 -- # local bs 00:20:15.179 05:17:34 -- common/autotest_common.sh@1360 -- # local nb 00:20:15.179 05:17:34 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 00:20:15.437 05:17:34 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:20:15.437 { 00:20:15.437 "name": "41dbcb7a-8c57-49e2-8f0e-4ba56ac30966", 00:20:15.437 "aliases": [ 00:20:15.437 "lvs/nvme0n1p0" 00:20:15.437 ], 00:20:15.437 "product_name": "Logical Volume", 00:20:15.437 "block_size": 4096, 00:20:15.437 "num_blocks": 26476544, 00:20:15.437 "uuid": "41dbcb7a-8c57-49e2-8f0e-4ba56ac30966", 00:20:15.437 "assigned_rate_limits": { 00:20:15.437 "rw_ios_per_sec": 0, 00:20:15.437 "rw_mbytes_per_sec": 0, 00:20:15.437 "r_mbytes_per_sec": 0, 00:20:15.437 "w_mbytes_per_sec": 0 00:20:15.437 }, 00:20:15.437 "claimed": false, 00:20:15.437 "zoned": false, 00:20:15.437 "supported_io_types": { 00:20:15.437 
"read": true, 00:20:15.437 "write": true, 00:20:15.437 "unmap": true, 00:20:15.437 "write_zeroes": true, 00:20:15.437 "flush": false, 00:20:15.437 "reset": true, 00:20:15.437 "compare": false, 00:20:15.437 "compare_and_write": false, 00:20:15.437 "abort": false, 00:20:15.437 "nvme_admin": false, 00:20:15.437 "nvme_io": false 00:20:15.437 }, 00:20:15.437 "driver_specific": { 00:20:15.437 "lvol": { 00:20:15.437 "lvol_store_uuid": "ddf58bfc-49c4-4f62-9511-08d0873d51e1", 00:20:15.437 "base_bdev": "nvme0n1", 00:20:15.437 "thin_provision": true, 00:20:15.437 "snapshot": false, 00:20:15.437 "clone": false, 00:20:15.437 "esnap_clone": false 00:20:15.437 } 00:20:15.437 } 00:20:15.437 } 00:20:15.437 ]' 00:20:15.437 05:17:34 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:20:15.437 05:17:34 -- common/autotest_common.sh@1362 -- # bs=4096 00:20:15.437 05:17:34 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:20:15.437 05:17:34 -- common/autotest_common.sh@1363 -- # nb=26476544 00:20:15.438 05:17:34 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:20:15.438 05:17:34 -- common/autotest_common.sh@1367 -- # echo 103424 00:20:15.438 05:17:34 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:15.438 05:17:34 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 --l2p_dram_limit 10' 00:20:15.438 05:17:34 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:15.438 05:17:34 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:15.438 05:17:34 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:15.438 05:17:34 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:15.438 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:15.438 05:17:34 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 41dbcb7a-8c57-49e2-8f0e-4ba56ac30966 --l2p_dram_limit 10 -c nvc0n1p0 00:20:15.697 [2024-07-26 05:17:34.616616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.697 [2024-07-26 05:17:34.616669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:15.697 [2024-07-26 05:17:34.616704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:15.697 [2024-07-26 05:17:34.616715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.697 [2024-07-26 05:17:34.616789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.697 [2024-07-26 05:17:34.616801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.697 [2024-07-26 05:17:34.616815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:15.697 [2024-07-26 05:17:34.616825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.697 [2024-07-26 05:17:34.616850] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:15.697 [2024-07-26 05:17:34.618100] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:15.697 [2024-07-26 05:17:34.618139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.697 [2024-07-26 05:17:34.618151] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.697 [2024-07-26 05:17:34.618165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.289 ms 00:20:15.697 [2024-07-26 05:17:34.618176] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.697 [2024-07-26 05:17:34.618390] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2f06a714-f9d0-4ac9-8874-e98b3d8385dd 00:20:15.697 [2024-07-26 05:17:34.619811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.697 [2024-07-26 05:17:34.619844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:15.697 [2024-07-26 05:17:34.619858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:15.697 [2024-07-26 05:17:34.619871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.697 [2024-07-26 05:17:34.627269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.697 [2024-07-26 05:17:34.627305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.697 [2024-07-26 05:17:34.627318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.355 ms 00:20:15.697 [2024-07-26 05:17:34.627331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.697 [2024-07-26 05:17:34.627428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.697 [2024-07-26 05:17:34.627445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.697 [2024-07-26 05:17:34.627456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:15.697 [2024-07-26 05:17:34.627473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.697 [2024-07-26 05:17:34.627534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.697 [2024-07-26 05:17:34.627548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:15.697 [2024-07-26 05:17:34.627560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:15.697 [2024-07-26 05:17:34.627575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.697 [2024-07-26 05:17:34.627603] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:15.697 [2024-07-26 05:17:34.633861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.697 [2024-07-26 05:17:34.633898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.697 [2024-07-26 05:17:34.633913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.265 ms 00:20:15.697 [2024-07-26 05:17:34.633924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.697 [2024-07-26 05:17:34.633965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.697 [2024-07-26 05:17:34.633976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:15.697 [2024-07-26 05:17:34.633989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:15.697 [2024-07-26 05:17:34.634000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.697 [2024-07-26 05:17:34.634045] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:15.697 [2024-07-26 05:17:34.634158] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:15.697 [2024-07-26 05:17:34.634178] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:15.697 [2024-07-26 
05:17:34.634192] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:15.697 [2024-07-26 05:17:34.634222] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:15.697 [2024-07-26 05:17:34.634236] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:15.698 [2024-07-26 05:17:34.634250] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:15.698 [2024-07-26 05:17:34.634260] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:15.698 [2024-07-26 05:17:34.634273] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:15.698 [2024-07-26 05:17:34.634287] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:15.698 [2024-07-26 05:17:34.634301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.698 [2024-07-26 05:17:34.634311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:15.698 [2024-07-26 05:17:34.634354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:20:15.698 [2024-07-26 05:17:34.634365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.698 [2024-07-26 05:17:34.634426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.698 [2024-07-26 05:17:34.634437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:15.698 [2024-07-26 05:17:34.634450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:15.698 [2024-07-26 05:17:34.634460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.698 [2024-07-26 05:17:34.634535] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:15.698 [2024-07-26 05:17:34.634547] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:15.698 [2024-07-26 05:17:34.634561] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.698 [2024-07-26 05:17:34.634573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.698 [2024-07-26 05:17:34.634585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:15.698 [2024-07-26 05:17:34.634595] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:15.698 [2024-07-26 05:17:34.634607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:15.698 [2024-07-26 05:17:34.634616] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:15.698 [2024-07-26 05:17:34.634628] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:15.698 [2024-07-26 05:17:34.634637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.698 [2024-07-26 05:17:34.634649] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:15.698 [2024-07-26 05:17:34.634658] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:15.698 [2024-07-26 05:17:34.634671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:15.698 [2024-07-26 05:17:34.634680] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:15.698 [2024-07-26 05:17:34.634692] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:15.698 [2024-07-26 05:17:34.634701] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:20:15.698 [2024-07-26 05:17:34.634715] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:15.698 [2024-07-26 05:17:34.634725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:15.698 [2024-07-26 05:17:34.634736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.698 [2024-07-26 05:17:34.634745] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:15.698 [2024-07-26 05:17:34.634757] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:15.698 [2024-07-26 05:17:34.634766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:15.698 [2024-07-26 05:17:34.634777] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:15.698 [2024-07-26 05:17:34.634787] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:15.698 [2024-07-26 05:17:34.634799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:15.698 [2024-07-26 05:17:34.634808] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:15.698 [2024-07-26 05:17:34.634819] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:15.698 [2024-07-26 05:17:34.634828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:15.698 [2024-07-26 05:17:34.634839] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:15.698 [2024-07-26 05:17:34.634849] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:15.698 [2024-07-26 05:17:34.634860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:15.698 [2024-07-26 05:17:34.634869] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:15.698 [2024-07-26 05:17:34.634885] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:15.698 [2024-07-26 05:17:34.634894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:15.698 [2024-07-26 05:17:34.634906] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:15.698 [2024-07-26 05:17:34.634915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:15.698 [2024-07-26 05:17:34.634926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.698 [2024-07-26 05:17:34.634935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:15.698 [2024-07-26 05:17:34.634948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:15.698 [2024-07-26 05:17:34.634958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:15.698 [2024-07-26 05:17:34.634969] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:15.698 [2024-07-26 05:17:34.634980] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:15.698 [2024-07-26 05:17:34.634992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:15.698 [2024-07-26 05:17:34.635002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:15.698 [2024-07-26 05:17:34.635014] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:15.698 [2024-07-26 05:17:34.635023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:15.698 [2024-07-26 05:17:34.635035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:15.698 [2024-07-26 05:17:34.635044] 
ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:15.698 [2024-07-26 05:17:34.635058] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:15.698 [2024-07-26 05:17:34.635068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:15.698 [2024-07-26 05:17:34.635081] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:15.698 [2024-07-26 05:17:34.635093] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.698 [2024-07-26 05:17:34.635110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:15.698 [2024-07-26 05:17:34.635121] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:15.698 [2024-07-26 05:17:34.635134] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:15.698 [2024-07-26 05:17:34.635145] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:15.698 [2024-07-26 05:17:34.635157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:15.698 [2024-07-26 05:17:34.635168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:15.698 [2024-07-26 05:17:34.635181] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:15.698 [2024-07-26 05:17:34.635191] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:15.698 [2024-07-26 05:17:34.635214] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:15.698 [2024-07-26 05:17:34.635226] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:15.698 [2024-07-26 05:17:34.635238] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:15.698 [2024-07-26 05:17:34.635249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:15.698 [2024-07-26 05:17:34.635267] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:15.698 [2024-07-26 05:17:34.635277] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:15.698 [2024-07-26 05:17:34.635292] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:15.698 [2024-07-26 05:17:34.635303] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:15.698 [2024-07-26 05:17:34.635316] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x1900000 00:20:15.698 [2024-07-26 05:17:34.635327] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:15.698 [2024-07-26 05:17:34.635340] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:15.698 [2024-07-26 05:17:34.635351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.698 [2024-07-26 05:17:34.635363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:15.698 [2024-07-26 05:17:34.635374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:20:15.698 [2024-07-26 05:17:34.635386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.698 [2024-07-26 05:17:34.660660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.698 [2024-07-26 05:17:34.660698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.698 [2024-07-26 05:17:34.660712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.230 ms 00:20:15.699 [2024-07-26 05:17:34.660724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.699 [2024-07-26 05:17:34.660824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.699 [2024-07-26 05:17:34.660840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:15.699 [2024-07-26 05:17:34.660851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:15.699 [2024-07-26 05:17:34.660873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.699 [2024-07-26 05:17:34.716046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.699 [2024-07-26 05:17:34.716085] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.699 [2024-07-26 05:17:34.716115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.118 ms 00:20:15.699 [2024-07-26 05:17:34.716128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.699 [2024-07-26 05:17:34.716163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.699 [2024-07-26 05:17:34.716180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.699 [2024-07-26 05:17:34.716191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:15.699 [2024-07-26 05:17:34.716203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.699 [2024-07-26 05:17:34.716691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.699 [2024-07-26 05:17:34.716711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.699 [2024-07-26 05:17:34.716722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:20:15.699 [2024-07-26 05:17:34.716735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.699 [2024-07-26 05:17:34.716840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.699 [2024-07-26 05:17:34.716859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.699 [2024-07-26 05:17:34.716870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:15.699 [2024-07-26 05:17:34.716882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.699 
[2024-07-26 05:17:34.741559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.699 [2024-07-26 05:17:34.741597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.699 [2024-07-26 05:17:34.741612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.655 ms 00:20:15.699 [2024-07-26 05:17:34.741626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.699 [2024-07-26 05:17:34.755563] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:15.699 [2024-07-26 05:17:34.758815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.699 [2024-07-26 05:17:34.758848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:15.699 [2024-07-26 05:17:34.758863] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.094 ms 00:20:15.699 [2024-07-26 05:17:34.758873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.958 [2024-07-26 05:17:34.848534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.958 [2024-07-26 05:17:34.848602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:15.958 [2024-07-26 05:17:34.848622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.623 ms 00:20:15.958 [2024-07-26 05:17:34.848632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.958 [2024-07-26 05:17:34.848686] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:20:15.958 [2024-07-26 05:17:34.848701] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:20:19.244 [2024-07-26 05:17:37.681195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.244 [2024-07-26 05:17:37.681290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:19.244 [2024-07-26 05:17:37.681313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2832.485 ms 00:20:19.244 [2024-07-26 05:17:37.681323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.244 [2024-07-26 05:17:37.681539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.244 [2024-07-26 05:17:37.681552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.244 [2024-07-26 05:17:37.681567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:20:19.244 [2024-07-26 05:17:37.681578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.244 [2024-07-26 05:17:37.719752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.244 [2024-07-26 05:17:37.719803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:19.244 [2024-07-26 05:17:37.719822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.112 ms 00:20:19.244 [2024-07-26 05:17:37.719832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.244 [2024-07-26 05:17:37.757904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.244 [2024-07-26 05:17:37.757940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:19.244 [2024-07-26 05:17:37.757962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.024 ms 00:20:19.244 [2024-07-26 05:17:37.757972] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.244 [2024-07-26 05:17:37.758417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.244 [2024-07-26 05:17:37.758432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:19.244 [2024-07-26 05:17:37.758446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:20:19.244 [2024-07-26 05:17:37.758456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.244 [2024-07-26 05:17:37.855795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.244 [2024-07-26 05:17:37.855837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:19.244 [2024-07-26 05:17:37.855855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.284 ms 00:20:19.244 [2024-07-26 05:17:37.855865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.244 [2024-07-26 05:17:37.895007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.244 [2024-07-26 05:17:37.895045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:19.244 [2024-07-26 05:17:37.895078] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.094 ms 00:20:19.244 [2024-07-26 05:17:37.895092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.244 [2024-07-26 05:17:37.897276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.244 [2024-07-26 05:17:37.897322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:19.244 [2024-07-26 05:17:37.897339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:20:19.245 [2024-07-26 05:17:37.897349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.245 [2024-07-26 05:17:37.935420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.245 [2024-07-26 05:17:37.935455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.245 [2024-07-26 05:17:37.935471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.997 ms 00:20:19.245 [2024-07-26 05:17:37.935497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.245 [2024-07-26 05:17:37.935551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.245 [2024-07-26 05:17:37.935565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:19.245 [2024-07-26 05:17:37.935578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:19.245 [2024-07-26 05:17:37.935588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.245 [2024-07-26 05:17:37.935689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.245 [2024-07-26 05:17:37.935702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:19.245 [2024-07-26 05:17:37.935717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:19.245 [2024-07-26 05:17:37.935727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.245 [2024-07-26 05:17:37.937041] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3319.967 ms, result 0 00:20:19.245 { 00:20:19.245 "name": "ftl0", 00:20:19.245 "uuid": "2f06a714-f9d0-4ac9-8874-e98b3d8385dd" 00:20:19.245 } 00:20:19.245 05:17:37 -- 
ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:19.245 05:17:37 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:19.245 05:17:38 -- ftl/restore.sh@63 -- # echo ']}' 00:20:19.245 05:17:38 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:19.504 [2024-07-26 05:17:38.448318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.504 [2024-07-26 05:17:38.448382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:19.504 [2024-07-26 05:17:38.448398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:19.504 [2024-07-26 05:17:38.448411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.504 [2024-07-26 05:17:38.448450] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:19.504 [2024-07-26 05:17:38.452152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.504 [2024-07-26 05:17:38.452181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:19.504 [2024-07-26 05:17:38.452197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.679 ms 00:20:19.504 [2024-07-26 05:17:38.452222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.504 [2024-07-26 05:17:38.452491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.504 [2024-07-26 05:17:38.452505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:19.504 [2024-07-26 05:17:38.452522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:20:19.504 [2024-07-26 05:17:38.452532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.504 [2024-07-26 05:17:38.455133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.504 [2024-07-26 05:17:38.455155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:19.504 [2024-07-26 05:17:38.455169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.568 ms 00:20:19.504 [2024-07-26 05:17:38.455179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.504 [2024-07-26 05:17:38.460337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.504 [2024-07-26 05:17:38.460369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:19.504 [2024-07-26 05:17:38.460384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.129 ms 00:20:19.504 [2024-07-26 05:17:38.460397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.504 [2024-07-26 05:17:38.498846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.504 [2024-07-26 05:17:38.498883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:19.504 [2024-07-26 05:17:38.498900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.368 ms 00:20:19.504 [2024-07-26 05:17:38.498910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.504 [2024-07-26 05:17:38.521970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.504 [2024-07-26 05:17:38.522017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:19.504 [2024-07-26 05:17:38.522036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.010 ms 00:20:19.504 
[2024-07-26 05:17:38.522047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.504 [2024-07-26 05:17:38.522202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.504 [2024-07-26 05:17:38.522254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:19.504 [2024-07-26 05:17:38.522270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:20:19.504 [2024-07-26 05:17:38.522281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.504 [2024-07-26 05:17:38.560747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.504 [2024-07-26 05:17:38.560786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:19.504 [2024-07-26 05:17:38.560802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.423 ms 00:20:19.504 [2024-07-26 05:17:38.560829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.504 [2024-07-26 05:17:38.598890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.504 [2024-07-26 05:17:38.598926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:19.504 [2024-07-26 05:17:38.598942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.012 ms 00:20:19.504 [2024-07-26 05:17:38.598952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.764 [2024-07-26 05:17:38.636258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.764 [2024-07-26 05:17:38.636292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:19.764 [2024-07-26 05:17:38.636308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.260 ms 00:20:19.764 [2024-07-26 05:17:38.636318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.764 [2024-07-26 05:17:38.673551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.764 [2024-07-26 05:17:38.673587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:19.764 [2024-07-26 05:17:38.673603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.134 ms 00:20:19.764 [2024-07-26 05:17:38.673629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.764 [2024-07-26 05:17:38.673674] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:19.764 [2024-07-26 05:17:38.673691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673782] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.673990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.674003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.674015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.674029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.674040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.674053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.674064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:19.764 [2024-07-26 05:17:38.674081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674093] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 
05:17:38.674424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:20:19.765 [2024-07-26 05:17:38.674737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:19.765 [2024-07-26 05:17:38.674968] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:19.765 [2024-07-26 05:17:38.674983] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f06a714-f9d0-4ac9-8874-e98b3d8385dd 00:20:19.765 [2024-07-26 05:17:38.674994] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:19.765 [2024-07-26 05:17:38.675007] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:19.765 [2024-07-26 05:17:38.675016] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:19.765 [2024-07-26 05:17:38.675029] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:19.765 [2024-07-26 05:17:38.675039] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:19.765 [2024-07-26 05:17:38.675052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:19.765 [2024-07-26 05:17:38.675062] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:19.765 [2024-07-26 05:17:38.675073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:19.765 [2024-07-26 05:17:38.675082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:19.765 [2024-07-26 05:17:38.675097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.765 [2024-07-26 05:17:38.675108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:19.765 [2024-07-26 05:17:38.675121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:20:19.765 [2024-07-26 05:17:38.675131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.765 [2024-07-26 05:17:38.694844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.765 [2024-07-26 05:17:38.694876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:19.765 [2024-07-26 05:17:38.694891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.657 ms 00:20:19.765 [2024-07-26 05:17:38.694917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.765 [2024-07-26 05:17:38.695151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.765 [2024-07-26 05:17:38.695162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:19.765 [2024-07-26 05:17:38.695175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:20:19.765 [2024-07-26 05:17:38.695187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.766 [2024-07-26 05:17:38.765311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.766 [2024-07-26 05:17:38.765348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.766 [2024-07-26 05:17:38.765365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.766 [2024-07-26 05:17:38.765376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.766 [2024-07-26 05:17:38.765441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.766 [2024-07-26 05:17:38.765452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.766 [2024-07-26 05:17:38.765465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.766 [2024-07-26 05:17:38.765478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.766 [2024-07-26 05:17:38.765559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.766 [2024-07-26 05:17:38.765573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.766 [2024-07-26 05:17:38.765585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.766 [2024-07-26 05:17:38.765595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.766 [2024-07-26 05:17:38.765617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.766 [2024-07-26 05:17:38.765628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.766 [2024-07-26 05:17:38.765640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.766 [2024-07-26 05:17:38.765650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.025 [2024-07-26 05:17:38.889218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:20:20.025 [2024-07-26 05:17:38.889282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.025 [2024-07-26 05:17:38.889324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.025 [2024-07-26 05:17:38.889335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.025 [2024-07-26 05:17:38.937069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.025 [2024-07-26 05:17:38.937119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.025 [2024-07-26 05:17:38.937136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.025 [2024-07-26 05:17:38.937149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.025 [2024-07-26 05:17:38.937274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.025 [2024-07-26 05:17:38.937296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.025 [2024-07-26 05:17:38.937310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.025 [2024-07-26 05:17:38.937320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.025 [2024-07-26 05:17:38.937376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.025 [2024-07-26 05:17:38.937388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.025 [2024-07-26 05:17:38.937401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.025 [2024-07-26 05:17:38.937411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.025 [2024-07-26 05:17:38.937529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.025 [2024-07-26 05:17:38.937543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.025 [2024-07-26 05:17:38.937555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.025 [2024-07-26 05:17:38.937565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.025 [2024-07-26 05:17:38.937609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.025 [2024-07-26 05:17:38.937621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:20.025 [2024-07-26 05:17:38.937634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.025 [2024-07-26 05:17:38.937644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.025 [2024-07-26 05:17:38.937687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.025 [2024-07-26 05:17:38.937700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.025 [2024-07-26 05:17:38.937712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.025 [2024-07-26 05:17:38.937722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.025 [2024-07-26 05:17:38.937771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.025 [2024-07-26 05:17:38.937782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.025 [2024-07-26 05:17:38.937795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.025 [2024-07-26 05:17:38.937805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.025 [2024-07-26 
05:17:38.937942] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 489.579 ms, result 0 00:20:20.025 true 00:20:20.025 05:17:38 -- ftl/restore.sh@66 -- # killprocess 74118 00:20:20.025 05:17:38 -- common/autotest_common.sh@926 -- # '[' -z 74118 ']' 00:20:20.025 05:17:38 -- common/autotest_common.sh@930 -- # kill -0 74118 00:20:20.025 05:17:38 -- common/autotest_common.sh@931 -- # uname 00:20:20.025 05:17:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:20.025 05:17:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74118 00:20:20.025 05:17:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:20.026 05:17:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:20.026 05:17:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74118' 00:20:20.026 killing process with pid 74118 00:20:20.026 05:17:38 -- common/autotest_common.sh@945 -- # kill 74118 00:20:20.026 05:17:38 -- common/autotest_common.sh@950 -- # wait 74118 00:20:25.298 05:17:44 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:29.487 262144+0 records in 00:20:29.487 262144+0 records out 00:20:29.487 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.07125 s, 264 MB/s 00:20:29.487 05:17:48 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:30.862 05:17:49 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:31.121 [2024-07-26 05:17:50.068722] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
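The trace above is ordinary shell all the way down: ftl/restore.sh wraps the live bdev configuration in a '{"subsystems": [...]}' envelope, detaches ftl0 with bdev_ftl_unload (the 'FTL shutdown' sequence that persists the L2P, NV cache metadata and superblock before setting the clean state), kills the target via killprocess, and stages a 1 GiB random payload. A minimal sketch of those steps, assuming the echo/rpc output is redirected into ftl.json (the trace shows only the commands, not the redirect) and reusing the same paths as this run:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Assemble a bdev-only JSON config: save_subsystem_config prints a single
  # subsystem object, so the script brackets it into a top-level array.
  {
    echo '{"subsystems": ['
    "$SPDK"/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > "$SPDK"/test/ftl/config/ftl.json
  # Clean detach of the FTL bdev; this triggers the shutdown steps logged above.
  "$SPDK"/scripts/rpc.py bdev_ftl_unload -b ftl0
  # 1 GiB of random data: 256K blocks of 4 KiB, checksummed for a later compare.
  dd if=/dev/urandom of="$SPDK"/test/ftl/testfile bs=4K count=256K
  md5sum "$SPDK"/test/ftl/testfile
  # Replay the payload through a fresh target; loading ftl.json restarts the
  # app and produces the 'FTL startup' sequence that follows.
  "$SPDK"/build/bin/spdk_dd --if="$SPDK"/test/ftl/testfile --ob=ftl0 \
      --json="$SPDK"/test/ftl/config/ftl.json

Note that the kill -0 74118 probe inside killprocess sends no signal at all: with signal 0, kill only checks that the pid still exists and is signalable, which is why the helper can safely follow it with a real kill and a wait.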
00:20:31.121 [2024-07-26 05:17:50.068876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74377 ] 00:20:31.380 [2024-07-26 05:17:50.252380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.638 [2024-07-26 05:17:50.528590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.897 [2024-07-26 05:17:50.937189] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.897 [2024-07-26 05:17:50.937274] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:32.158 [2024-07-26 05:17:51.092326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.158 [2024-07-26 05:17:51.092375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:32.158 [2024-07-26 05:17:51.092391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:32.158 [2024-07-26 05:17:51.092401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.158 [2024-07-26 05:17:51.092449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.158 [2024-07-26 05:17:51.092461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:32.158 [2024-07-26 05:17:51.092472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:32.158 [2024-07-26 05:17:51.092481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.158 [2024-07-26 05:17:51.092501] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:32.158 [2024-07-26 05:17:51.093621] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:32.158 [2024-07-26 05:17:51.093651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.158 [2024-07-26 05:17:51.093662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:32.158 [2024-07-26 05:17:51.093674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:20:32.158 [2024-07-26 05:17:51.093684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.158 [2024-07-26 05:17:51.095109] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:32.158 [2024-07-26 05:17:51.115485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.158 [2024-07-26 05:17:51.115653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:32.158 [2024-07-26 05:17:51.115740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.377 ms 00:20:32.158 [2024-07-26 05:17:51.115776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.158 [2024-07-26 05:17:51.115853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.158 [2024-07-26 05:17:51.115893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:32.158 [2024-07-26 05:17:51.115924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:32.158 [2024-07-26 05:17:51.116011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.158 [2024-07-26 05:17:51.122865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.158 [2024-07-26 
05:17:51.123009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:32.158 [2024-07-26 05:17:51.123141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.753 ms 00:20:32.158 [2024-07-26 05:17:51.123178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.158 [2024-07-26 05:17:51.123301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.158 [2024-07-26 05:17:51.123389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:32.158 [2024-07-26 05:17:51.123425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:32.158 [2024-07-26 05:17:51.123455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.158 [2024-07-26 05:17:51.123562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.158 [2024-07-26 05:17:51.123605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:32.158 [2024-07-26 05:17:51.123636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:32.158 [2024-07-26 05:17:51.123665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.158 [2024-07-26 05:17:51.123716] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:32.158 [2024-07-26 05:17:51.129936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.158 [2024-07-26 05:17:51.130065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:32.158 [2024-07-26 05:17:51.130198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.230 ms 00:20:32.158 [2024-07-26 05:17:51.130248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.158 [2024-07-26 05:17:51.130310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.158 [2024-07-26 05:17:51.130394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:32.158 [2024-07-26 05:17:51.130449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:32.158 [2024-07-26 05:17:51.130478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.158 [2024-07-26 05:17:51.130548] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:32.158 [2024-07-26 05:17:51.130599] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:32.158 [2024-07-26 05:17:51.130669] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:32.158 [2024-07-26 05:17:51.130774] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:32.158 [2024-07-26 05:17:51.130844] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:32.158 [2024-07-26 05:17:51.130857] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:32.158 [2024-07-26 05:17:51.130870] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:32.158 [2024-07-26 05:17:51.130883] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:32.158 [2024-07-26 05:17:51.130895] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:32.158 [2024-07-26 05:17:51.130911] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:32.158 [2024-07-26 05:17:51.130921] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:32.158 [2024-07-26 05:17:51.130931] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:32.158 [2024-07-26 05:17:51.130940] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:32.158 [2024-07-26 05:17:51.130951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.158 [2024-07-26 05:17:51.130962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:32.158 [2024-07-26 05:17:51.130973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:20:32.158 [2024-07-26 05:17:51.130983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.158 [2024-07-26 05:17:51.131047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.158 [2024-07-26 05:17:51.131059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:32.158 [2024-07-26 05:17:51.131071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:32.158 [2024-07-26 05:17:51.131081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.158 [2024-07-26 05:17:51.131143] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:32.158 [2024-07-26 05:17:51.131155] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:32.158 [2024-07-26 05:17:51.131165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.158 [2024-07-26 05:17:51.131176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.158 [2024-07-26 05:17:51.131186] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:32.158 [2024-07-26 05:17:51.131195] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:32.158 [2024-07-26 05:17:51.131221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:32.158 [2024-07-26 05:17:51.131231] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:32.158 [2024-07-26 05:17:51.131241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:32.158 [2024-07-26 05:17:51.131250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.158 [2024-07-26 05:17:51.131259] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:32.158 [2024-07-26 05:17:51.131268] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:32.158 [2024-07-26 05:17:51.131277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.158 [2024-07-26 05:17:51.131287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:32.159 [2024-07-26 05:17:51.131296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:32.159 [2024-07-26 05:17:51.131307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.159 [2024-07-26 05:17:51.131317] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:32.159 [2024-07-26 05:17:51.131326] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:32.159 [2024-07-26 05:17:51.131335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:32.159 [2024-07-26 05:17:51.131347] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:32.159 [2024-07-26 05:17:51.131363] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:32.159 [2024-07-26 05:17:51.131392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:32.159 [2024-07-26 05:17:51.131406] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:32.159 [2024-07-26 05:17:51.131422] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:32.159 [2024-07-26 05:17:51.131438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:32.159 [2024-07-26 05:17:51.131451] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:32.159 [2024-07-26 05:17:51.131466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:32.159 [2024-07-26 05:17:51.131482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:32.159 [2024-07-26 05:17:51.131497] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:32.159 [2024-07-26 05:17:51.131514] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:32.159 [2024-07-26 05:17:51.131528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:32.159 [2024-07-26 05:17:51.131544] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:32.159 [2024-07-26 05:17:51.131562] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:32.159 [2024-07-26 05:17:51.131579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:32.159 [2024-07-26 05:17:51.131596] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:32.159 [2024-07-26 05:17:51.131614] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:32.159 [2024-07-26 05:17:51.131630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.159 [2024-07-26 05:17:51.131647] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:32.159 [2024-07-26 05:17:51.131665] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:32.159 [2024-07-26 05:17:51.131682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.159 [2024-07-26 05:17:51.131699] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:32.159 [2024-07-26 05:17:51.131717] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:32.159 [2024-07-26 05:17:51.131732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.159 [2024-07-26 05:17:51.131755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.159 [2024-07-26 05:17:51.131773] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:32.159 [2024-07-26 05:17:51.131790] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:32.159 [2024-07-26 05:17:51.131807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:32.159 [2024-07-26 05:17:51.131827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:32.159 [2024-07-26 05:17:51.131845] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:32.159 [2024-07-26 05:17:51.131862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:32.159 [2024-07-26 05:17:51.131881] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:32.159 [2024-07-26 05:17:51.131908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.159 [2024-07-26 05:17:51.131928] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:32.159 [2024-07-26 05:17:51.131948] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:32.159 [2024-07-26 05:17:51.131970] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:32.159 [2024-07-26 05:17:51.131990] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:32.159 [2024-07-26 05:17:51.132008] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:32.159 [2024-07-26 05:17:51.132025] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:32.159 [2024-07-26 05:17:51.132043] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:32.159 [2024-07-26 05:17:51.132062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:32.159 [2024-07-26 05:17:51.132081] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:32.159 [2024-07-26 05:17:51.132100] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:32.159 [2024-07-26 05:17:51.132119] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:32.159 [2024-07-26 05:17:51.132138] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:32.159 [2024-07-26 05:17:51.132158] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:32.159 [2024-07-26 05:17:51.132177] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:32.159 [2024-07-26 05:17:51.132202] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.159 [2024-07-26 05:17:51.132237] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:32.159 [2024-07-26 05:17:51.132260] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:32.159 [2024-07-26 05:17:51.132279] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:32.159 [2024-07-26 05:17:51.132297] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:20:32.159 [2024-07-26 05:17:51.132317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.159 [2024-07-26 05:17:51.132335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:32.159 [2024-07-26 05:17:51.132353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:20:32.159 [2024-07-26 05:17:51.132369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.159 [2024-07-26 05:17:51.158895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.159 [2024-07-26 05:17:51.158928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:32.159 [2024-07-26 05:17:51.158942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.450 ms 00:20:32.159 [2024-07-26 05:17:51.158952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.159 [2024-07-26 05:17:51.159030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.159 [2024-07-26 05:17:51.159044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:32.159 [2024-07-26 05:17:51.159054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:32.159 [2024-07-26 05:17:51.159063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.159 [2024-07-26 05:17:51.221889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.159 [2024-07-26 05:17:51.221928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:32.159 [2024-07-26 05:17:51.221942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.772 ms 00:20:32.159 [2024-07-26 05:17:51.221955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.159 [2024-07-26 05:17:51.221996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.159 [2024-07-26 05:17:51.222007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:32.159 [2024-07-26 05:17:51.222017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:32.159 [2024-07-26 05:17:51.222027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.159 [2024-07-26 05:17:51.222505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.159 [2024-07-26 05:17:51.222519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:32.159 [2024-07-26 05:17:51.222529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:20:32.159 [2024-07-26 05:17:51.222539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.159 [2024-07-26 05:17:51.222649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.159 [2024-07-26 05:17:51.222668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:32.159 [2024-07-26 05:17:51.222678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:32.159 [2024-07-26 05:17:51.222688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.159 [2024-07-26 05:17:51.246021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.159 [2024-07-26 05:17:51.246060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:32.160 [2024-07-26 05:17:51.246074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.311 ms 00:20:32.160 [2024-07-26 
05:17:51.246085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.419 [2024-07-26 05:17:51.266628] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:32.419 [2024-07-26 05:17:51.266669] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:32.419 [2024-07-26 05:17:51.266686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.419 [2024-07-26 05:17:51.266699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:32.419 [2024-07-26 05:17:51.266714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.488 ms 00:20:32.419 [2024-07-26 05:17:51.266726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.419 [2024-07-26 05:17:51.297789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.419 [2024-07-26 05:17:51.297831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:32.419 [2024-07-26 05:17:51.297845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.994 ms 00:20:32.419 [2024-07-26 05:17:51.297856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.419 [2024-07-26 05:17:51.317851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.419 [2024-07-26 05:17:51.317891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:32.419 [2024-07-26 05:17:51.317903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.945 ms 00:20:32.419 [2024-07-26 05:17:51.317913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.419 [2024-07-26 05:17:51.337013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.419 [2024-07-26 05:17:51.337049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:32.419 [2024-07-26 05:17:51.337062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.060 ms 00:20:32.419 [2024-07-26 05:17:51.337072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.419 [2024-07-26 05:17:51.337627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.419 [2024-07-26 05:17:51.337649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:32.419 [2024-07-26 05:17:51.337660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:20:32.419 [2024-07-26 05:17:51.337670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.419 [2024-07-26 05:17:51.431097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.419 [2024-07-26 05:17:51.431159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:32.419 [2024-07-26 05:17:51.431176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.406 ms 00:20:32.419 [2024-07-26 05:17:51.431202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.419 [2024-07-26 05:17:51.443690] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:32.419 [2024-07-26 05:17:51.446654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.419 [2024-07-26 05:17:51.446680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:32.419 [2024-07-26 05:17:51.446709] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.385 ms 00:20:32.419 [2024-07-26 05:17:51.446719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.419 [2024-07-26 05:17:51.446800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.419 [2024-07-26 05:17:51.446813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:32.419 [2024-07-26 05:17:51.446827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:32.419 [2024-07-26 05:17:51.446836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.419 [2024-07-26 05:17:51.446904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.419 [2024-07-26 05:17:51.446915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:32.419 [2024-07-26 05:17:51.446925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:32.419 [2024-07-26 05:17:51.446934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.419 [2024-07-26 05:17:51.448960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.419 [2024-07-26 05:17:51.448989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:32.419 [2024-07-26 05:17:51.449000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.009 ms 00:20:32.419 [2024-07-26 05:17:51.449015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.419 [2024-07-26 05:17:51.449042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.419 [2024-07-26 05:17:51.449053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:32.419 [2024-07-26 05:17:51.449064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:32.419 [2024-07-26 05:17:51.449073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.419 [2024-07-26 05:17:51.449113] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:32.419 [2024-07-26 05:17:51.449125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.419 [2024-07-26 05:17:51.449135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:32.420 [2024-07-26 05:17:51.449145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:32.420 [2024-07-26 05:17:51.449154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.420 [2024-07-26 05:17:51.487647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.420 [2024-07-26 05:17:51.487696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:32.420 [2024-07-26 05:17:51.487711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.470 ms 00:20:32.420 [2024-07-26 05:17:51.487721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.420 [2024-07-26 05:17:51.487805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.420 [2024-07-26 05:17:51.487817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:32.420 [2024-07-26 05:17:51.487827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:32.420 [2024-07-26 05:17:51.487844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.420 [2024-07-26 05:17:51.489027] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 396.238 ms, result 0 00:21:04.616  Copying: 31/1024 [MB] (31 MBps) Copying: 62/1024 [MB] (31 MBps) Copying: 95/1024 [MB] (32 MBps) Copying: 127/1024 [MB] (32 MBps) Copying: 157/1024 [MB] (29 MBps) Copying: 189/1024 [MB] (31 MBps) Copying: 221/1024 [MB] (32 MBps) Copying: 254/1024 [MB] (32 MBps) Copying: 286/1024 [MB] (32 MBps) Copying: 320/1024 [MB] (33 MBps) Copying: 352/1024 [MB] (31 MBps) Copying: 384/1024 [MB] (31 MBps) Copying: 416/1024 [MB] (32 MBps) Copying: 447/1024 [MB] (31 MBps) Copying: 479/1024 [MB] (31 MBps) Copying: 511/1024 [MB] (32 MBps) Copying: 543/1024 [MB] (32 MBps) Copying: 575/1024 [MB] (31 MBps) Copying: 607/1024 [MB] (32 MBps) Copying: 639/1024 [MB] (31 MBps) Copying: 671/1024 [MB] (31 MBps) Copying: 703/1024 [MB] (32 MBps) Copying: 735/1024 [MB] (32 MBps) Copying: 768/1024 [MB] (32 MBps) Copying: 800/1024 [MB] (32 MBps) Copying: 833/1024 [MB] (32 MBps) Copying: 864/1024 [MB] (31 MBps) Copying: 897/1024 [MB] (32 MBps) Copying: 930/1024 [MB] (32 MBps) Copying: 962/1024 [MB] (32 MBps) Copying: 994/1024 [MB] (32 MBps) Copying: 1024/1024 [MB] (average 32 MBps)[2024-07-26 05:18:23.424784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.616 [2024-07-26 05:18:23.424836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:04.616 [2024-07-26 05:18:23.424853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:04.616 [2024-07-26 05:18:23.424863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.616 [2024-07-26 05:18:23.424884] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:04.616 [2024-07-26 05:18:23.428821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.616 [2024-07-26 05:18:23.428854] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:04.616 [2024-07-26 05:18:23.428867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.921 ms 00:21:04.616 [2024-07-26 05:18:23.428876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.616 [2024-07-26 05:18:23.430855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.616 [2024-07-26 05:18:23.430893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:04.616 [2024-07-26 05:18:23.430906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.936 ms 00:21:04.616 [2024-07-26 05:18:23.430916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.616 [2024-07-26 05:18:23.445416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.616 [2024-07-26 05:18:23.445455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:04.616 [2024-07-26 05:18:23.445470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.482 ms 00:21:04.616 [2024-07-26 05:18:23.445481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.616 [2024-07-26 05:18:23.450794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.616 [2024-07-26 05:18:23.450834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:04.616 [2024-07-26 05:18:23.450846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.280 ms 00:21:04.616 [2024-07-26 05:18:23.450856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.616 
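The startup dump above pins down enough geometry to sanity-check by hand. The L2P region is reported as 20971520 entries with an address size of 4, and 20971520 x 4 B = 83886080 B = 80 MiB, exactly the 'blocks: 80.00 MiB' shown for Region l2p. Likewise the copy phase moved 1024 MB at an average of 32 MBps, i.e. roughly 32 s, which matches the gap between the 'FTL startup' finish at 05:17:51 and the first shutdown step at 05:18:23. As pure arithmetic (no SPDK involved):

  echo $(( 20971520 * 4 / 1024 / 1024 ))  # -> 80, the l2p region size in MiB
  echo $(( 1024 / 32 ))                   # -> 32, expected copy time in seconds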
[2024-07-26 05:18:23.489414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.616 [2024-07-26 05:18:23.489452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:04.616 [2024-07-26 05:18:23.489467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.513 ms 00:21:04.616 [2024-07-26 05:18:23.489476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.616 [2024-07-26 05:18:23.510563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.616 [2024-07-26 05:18:23.510600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:04.616 [2024-07-26 05:18:23.510625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.048 ms 00:21:04.616 [2024-07-26 05:18:23.510634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.616 [2024-07-26 05:18:23.510786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.616 [2024-07-26 05:18:23.510799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:04.616 [2024-07-26 05:18:23.510816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:04.616 [2024-07-26 05:18:23.510825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.616 [2024-07-26 05:18:23.549838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.616 [2024-07-26 05:18:23.549874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:04.616 [2024-07-26 05:18:23.549887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.997 ms 00:21:04.616 [2024-07-26 05:18:23.549897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.616 [2024-07-26 05:18:23.588503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.616 [2024-07-26 05:18:23.588540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:04.616 [2024-07-26 05:18:23.588553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.570 ms 00:21:04.616 [2024-07-26 05:18:23.588563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.616 [2024-07-26 05:18:23.625974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.616 [2024-07-26 05:18:23.626012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:04.616 [2024-07-26 05:18:23.626025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.374 ms 00:21:04.616 [2024-07-26 05:18:23.626034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.616 [2024-07-26 05:18:23.663702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.616 [2024-07-26 05:18:23.663737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:04.616 [2024-07-26 05:18:23.663749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.582 ms 00:21:04.616 [2024-07-26 05:18:23.663759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.616 [2024-07-26 05:18:23.663794] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:04.616 [2024-07-26 05:18:23.663811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 
state: free 00:21:04.616 [2024-07-26 05:18:23.663834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.663992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 
261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:04.616 [2024-07-26 05:18:23.664330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664652] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:04.617 [2024-07-26 05:18:23.664913] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:04.617 [2024-07-26 05:18:23.664923] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f06a714-f9d0-4ac9-8874-e98b3d8385dd 00:21:04.617 [2024-07-26 05:18:23.664934] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:04.617 [2024-07-26 05:18:23.664949] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:04.617 [2024-07-26 05:18:23.664958] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:04.617 [2024-07-26 05:18:23.664968] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:04.617 [2024-07-26 05:18:23.664978] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:04.617 [2024-07-26 05:18:23.664988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:04.617 [2024-07-26 05:18:23.664998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:04.617 [2024-07-26 05:18:23.665007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:04.617 [2024-07-26 05:18:23.665016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:04.617 [2024-07-26 05:18:23.665026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.617 [2024-07-26 05:18:23.665036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:04.617 [2024-07-26 05:18:23.665046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.232 ms 00:21:04.617 [2024-07-26 05:18:23.665056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.617 [2024-07-26 05:18:23.685193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.617 [2024-07-26 05:18:23.685235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:04.617 [2024-07-26 05:18:23.685264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.081 ms 00:21:04.617 [2024-07-26 05:18:23.685274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.617 [2024-07-26 05:18:23.685567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:04.617 [2024-07-26 05:18:23.685581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:04.617 [2024-07-26 05:18:23.685593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:21:04.617 [2024-07-26 05:18:23.685603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.948 [2024-07-26 05:18:23.740371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.948 [2024-07-26 05:18:23.740410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:04.948 [2024-07-26 05:18:23.740422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.948 [2024-07-26 05:18:23.740433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.948 [2024-07-26 05:18:23.740487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.948 [2024-07-26 05:18:23.740497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:04.948 [2024-07-26 05:18:23.740507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.948 [2024-07-26 05:18:23.740517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.948 [2024-07-26 05:18:23.740594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.948 [2024-07-26 05:18:23.740607] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:04.948 [2024-07-26 05:18:23.740617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.948 [2024-07-26 05:18:23.740626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.948 [2024-07-26 05:18:23.740642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.948 [2024-07-26 05:18:23.740653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:04.948 [2024-07-26 05:18:23.740662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.948 [2024-07-26 05:18:23.740672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.948 [2024-07-26 05:18:23.855531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.948 [2024-07-26 05:18:23.855588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:04.948 [2024-07-26 05:18:23.855602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.948 [2024-07-26 05:18:23.855628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.948 [2024-07-26 05:18:23.902686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.948 [2024-07-26 05:18:23.902733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:04.948 [2024-07-26 05:18:23.902747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.948 [2024-07-26 05:18:23.902774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.948 [2024-07-26 05:18:23.902850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.948 [2024-07-26 05:18:23.902867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:04.948 [2024-07-26 05:18:23.902877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.948 [2024-07-26 05:18:23.902886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.948 [2024-07-26 05:18:23.902929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.948 [2024-07-26 05:18:23.902940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:04.948 [2024-07-26 05:18:23.902949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.948 [2024-07-26 05:18:23.902959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.948 [2024-07-26 05:18:23.903062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.948 [2024-07-26 05:18:23.903079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:04.948 [2024-07-26 05:18:23.903089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.948 [2024-07-26 05:18:23.903099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.948 [2024-07-26 05:18:23.903136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.948 [2024-07-26 05:18:23.903148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:04.948 [2024-07-26 05:18:23.903158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.948 [2024-07-26 05:18:23.903167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.948 [2024-07-26 05:18:23.903201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
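
The Rollback entries above mirror the startup steps in reverse, each reporting 0.000 ms in this clean-shutdown pass, and the 'FTL shutdown' summary just below closes it out before spdk_dd starts the restore copy with --count=262144. The block counts and region sizes scattered through these dumps cross-check neatly; here is a back-of-the-envelope sketch in Python, where the 4096-byte FTL block size is an assumption (though consistent with the superblock dump further below, which prints 0x20 blocks as 0.12 MiB).

#!/usr/bin/env python3
# Quick size cross-checks for the dumps in this log. Not an SPDK tool;
# the 4096-byte FTL block size is assumed, not read from the log.
FTL_BLOCK = 4096                 # bytes per FTL block (assumed)
MiB = 1024 * 1024

# spdk_dd is run with --count=262144 blocks, i.e. the 1024 [MB] total
# seen in the Copying progress lines:
print(262144 * FTL_BLOCK // MiB, "MiB copied per spdk_dd pass")   # -> 1024

# L2P table: 20971520 entries x 4-byte addresses ("L2P entries" and
# "L2P address size" in the layout dump) -> the 80.00 MiB l2p region:
print(20971520 * 4 // MiB, "MiB of L2P table")                    # -> 80

# The same region in superblock units: type:0x2 blk_offs:0x20 blk_sz:0x5000:
print(0x5000 * FTL_BLOCK // MiB, "MiB for region type 0x2 (l2p)") # -> 80

# Each band holds 261120 blocks ("Band N: 0 / 261120" in the band dumps):
print(261120 * FTL_BLOCK // MiB, "MiB per band")                  # -> 1020
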
00:21:04.948 [2024-07-26 05:18:23.903211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:04.948 [2024-07-26 05:18:23.903246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.948 [2024-07-26 05:18:23.903256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.948 [2024-07-26 05:18:23.903313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:04.948 [2024-07-26 05:18:23.903325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:04.948 [2024-07-26 05:18:23.903335] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:04.948 [2024-07-26 05:18:23.903344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:04.948 [2024-07-26 05:18:23.903460] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 478.641 ms, result 0 00:21:06.854 00:21:06.854 00:21:06.854 05:18:25 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:06.854 [2024-07-26 05:18:25.902959] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:06.854 [2024-07-26 05:18:25.903114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74736 ] 00:21:07.113 [2024-07-26 05:18:26.085516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.372 [2024-07-26 05:18:26.309922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.631 [2024-07-26 05:18:26.707394] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:07.631 [2024-07-26 05:18:26.707460] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:07.891 [2024-07-26 05:18:26.862263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.891 [2024-07-26 05:18:26.862318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:07.891 [2024-07-26 05:18:26.862334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:07.891 [2024-07-26 05:18:26.862344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.891 [2024-07-26 05:18:26.862394] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.891 [2024-07-26 05:18:26.862406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:07.891 [2024-07-26 05:18:26.862416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:07.891 [2024-07-26 05:18:26.862425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.891 [2024-07-26 05:18:26.862446] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:07.891 [2024-07-26 05:18:26.863621] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:07.891 [2024-07-26 05:18:26.863647] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.891 [2024-07-26 05:18:26.863658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:07.891 [2024-07-26 
05:18:26.863669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.205 ms 00:21:07.891 [2024-07-26 05:18:26.863678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.891 [2024-07-26 05:18:26.865088] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:07.891 [2024-07-26 05:18:26.884771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.891 [2024-07-26 05:18:26.884808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:07.891 [2024-07-26 05:18:26.884826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.683 ms 00:21:07.891 [2024-07-26 05:18:26.884835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.891 [2024-07-26 05:18:26.884893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.891 [2024-07-26 05:18:26.884905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:07.891 [2024-07-26 05:18:26.884915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:07.891 [2024-07-26 05:18:26.884924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.891 [2024-07-26 05:18:26.891606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.891 [2024-07-26 05:18:26.891633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:07.891 [2024-07-26 05:18:26.891644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.616 ms 00:21:07.891 [2024-07-26 05:18:26.891653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.891 [2024-07-26 05:18:26.891738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.891 [2024-07-26 05:18:26.891750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:07.891 [2024-07-26 05:18:26.891760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:07.891 [2024-07-26 05:18:26.891769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.891 [2024-07-26 05:18:26.891806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.891 [2024-07-26 05:18:26.891821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:07.891 [2024-07-26 05:18:26.891831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:07.891 [2024-07-26 05:18:26.891840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.891 [2024-07-26 05:18:26.891866] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:07.891 [2024-07-26 05:18:26.897468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.891 [2024-07-26 05:18:26.897499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:07.891 [2024-07-26 05:18:26.897511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.610 ms 00:21:07.891 [2024-07-26 05:18:26.897537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.891 [2024-07-26 05:18:26.897570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.891 [2024-07-26 05:18:26.897582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:07.891 [2024-07-26 05:18:26.897592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:07.891 [2024-07-26 
05:18:26.897602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.891 [2024-07-26 05:18:26.897653] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:07.891 [2024-07-26 05:18:26.897687] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:07.891 [2024-07-26 05:18:26.897724] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:07.891 [2024-07-26 05:18:26.897741] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:07.891 [2024-07-26 05:18:26.897808] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:07.891 [2024-07-26 05:18:26.897821] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:07.891 [2024-07-26 05:18:26.897834] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:07.891 [2024-07-26 05:18:26.897847] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:07.891 [2024-07-26 05:18:26.897858] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:07.891 [2024-07-26 05:18:26.897873] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:07.891 [2024-07-26 05:18:26.897883] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:07.891 [2024-07-26 05:18:26.897892] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:07.891 [2024-07-26 05:18:26.897902] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:07.891 [2024-07-26 05:18:26.897912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.891 [2024-07-26 05:18:26.897922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:07.891 [2024-07-26 05:18:26.897932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:21:07.891 [2024-07-26 05:18:26.897942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.891 [2024-07-26 05:18:26.897996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.891 [2024-07-26 05:18:26.898007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:07.891 [2024-07-26 05:18:26.898020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:07.891 [2024-07-26 05:18:26.898030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.891 [2024-07-26 05:18:26.898095] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:07.891 [2024-07-26 05:18:26.898108] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:07.891 [2024-07-26 05:18:26.898118] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.891 [2024-07-26 05:18:26.898128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.891 [2024-07-26 05:18:26.898138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:07.891 [2024-07-26 05:18:26.898147] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:07.891 [2024-07-26 05:18:26.898156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 80.00 MiB 00:21:07.891 [2024-07-26 05:18:26.898167] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:07.891 [2024-07-26 05:18:26.898177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:07.891 [2024-07-26 05:18:26.898186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.891 [2024-07-26 05:18:26.898196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:07.891 [2024-07-26 05:18:26.898222] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:07.891 [2024-07-26 05:18:26.898232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:07.891 [2024-07-26 05:18:26.898242] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:07.891 [2024-07-26 05:18:26.898251] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:07.891 [2024-07-26 05:18:26.898260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.891 [2024-07-26 05:18:26.898269] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:07.891 [2024-07-26 05:18:26.898278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:07.891 [2024-07-26 05:18:26.898287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.891 [2024-07-26 05:18:26.898297] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:07.891 [2024-07-26 05:18:26.898306] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:07.891 [2024-07-26 05:18:26.898325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:07.891 [2024-07-26 05:18:26.898335] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:07.891 [2024-07-26 05:18:26.898344] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:07.891 [2024-07-26 05:18:26.898353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:07.891 [2024-07-26 05:18:26.898361] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:07.891 [2024-07-26 05:18:26.898370] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:07.891 [2024-07-26 05:18:26.898379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:07.891 [2024-07-26 05:18:26.898388] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:07.891 [2024-07-26 05:18:26.898398] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:07.892 [2024-07-26 05:18:26.898407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:07.892 [2024-07-26 05:18:26.898416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:07.892 [2024-07-26 05:18:26.898425] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:07.892 [2024-07-26 05:18:26.898434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:07.892 [2024-07-26 05:18:26.898454] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:07.892 [2024-07-26 05:18:26.898463] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:07.892 [2024-07-26 05:18:26.898471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.892 [2024-07-26 05:18:26.898480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:07.892 [2024-07-26 05:18:26.898489] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:07.892 [2024-07-26 05:18:26.898497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:07.892 [2024-07-26 05:18:26.898506] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:07.892 [2024-07-26 05:18:26.898515] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:07.892 [2024-07-26 05:18:26.898525] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:07.892 [2024-07-26 05:18:26.898539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:07.892 [2024-07-26 05:18:26.898548] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:07.892 [2024-07-26 05:18:26.898558] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:07.892 [2024-07-26 05:18:26.898566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:07.892 [2024-07-26 05:18:26.898576] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:07.892 [2024-07-26 05:18:26.898584] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:07.892 [2024-07-26 05:18:26.898593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:07.892 [2024-07-26 05:18:26.898603] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:07.892 [2024-07-26 05:18:26.898615] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:07.892 [2024-07-26 05:18:26.898626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:07.892 [2024-07-26 05:18:26.898636] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:07.892 [2024-07-26 05:18:26.898646] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:07.892 [2024-07-26 05:18:26.898656] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:07.892 [2024-07-26 05:18:26.898666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:07.892 [2024-07-26 05:18:26.898676] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:07.892 [2024-07-26 05:18:26.898685] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:07.892 [2024-07-26 05:18:26.898695] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:07.892 [2024-07-26 05:18:26.898705] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:07.892 [2024-07-26 05:18:26.898715] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:07.892 [2024-07-26 05:18:26.898725] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:07.892 [2024-07-26 
05:18:26.898735] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:07.892 [2024-07-26 05:18:26.898745] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:07.892 [2024-07-26 05:18:26.898755] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:07.892 [2024-07-26 05:18:26.898765] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:07.892 [2024-07-26 05:18:26.898776] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:07.892 [2024-07-26 05:18:26.898786] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:07.892 [2024-07-26 05:18:26.898796] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:07.892 [2024-07-26 05:18:26.898806] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:07.892 [2024-07-26 05:18:26.898817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.892 [2024-07-26 05:18:26.898826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:07.892 [2024-07-26 05:18:26.898836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:21:07.892 [2024-07-26 05:18:26.898847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.892 [2024-07-26 05:18:26.923977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.892 [2024-07-26 05:18:26.924137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:07.892 [2024-07-26 05:18:26.924280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.087 ms 00:21:07.892 [2024-07-26 05:18:26.924319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.892 [2024-07-26 05:18:26.924419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.892 [2024-07-26 05:18:26.924516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:07.892 [2024-07-26 05:18:26.924553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:07.892 [2024-07-26 05:18:26.924582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.892 [2024-07-26 05:18:26.986382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.892 [2024-07-26 05:18:26.986581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:07.892 [2024-07-26 05:18:26.986673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.687 ms 00:21:07.892 [2024-07-26 05:18:26.986714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.892 [2024-07-26 05:18:26.986769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.892 [2024-07-26 05:18:26.986800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:07.892 [2024-07-26 05:18:26.986830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:07.892 [2024-07-26 
05:18:26.986858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.892 [2024-07-26 05:18:26.987345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.892 [2024-07-26 05:18:26.987474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:07.892 [2024-07-26 05:18:26.987558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:21:07.892 [2024-07-26 05:18:26.987594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.892 [2024-07-26 05:18:26.987734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.892 [2024-07-26 05:18:26.987809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:07.892 [2024-07-26 05:18:26.987874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:21:07.892 [2024-07-26 05:18:26.987903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.010874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.152 [2024-07-26 05:18:27.011013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:08.152 [2024-07-26 05:18:27.011137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.929 ms 00:21:08.152 [2024-07-26 05:18:27.011175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.031043] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:08.152 [2024-07-26 05:18:27.031218] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:08.152 [2024-07-26 05:18:27.031398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.152 [2024-07-26 05:18:27.031432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:08.152 [2024-07-26 05:18:27.031463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.090 ms 00:21:08.152 [2024-07-26 05:18:27.031492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.062869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.152 [2024-07-26 05:18:27.063018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:08.152 [2024-07-26 05:18:27.063097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.277 ms 00:21:08.152 [2024-07-26 05:18:27.063133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.082440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.152 [2024-07-26 05:18:27.082585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:08.152 [2024-07-26 05:18:27.082655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.218 ms 00:21:08.152 [2024-07-26 05:18:27.082690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.101804] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.152 [2024-07-26 05:18:27.101932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:08.152 [2024-07-26 05:18:27.102003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.059 ms 00:21:08.152 [2024-07-26 05:18:27.102038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.102608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.152 [2024-07-26 05:18:27.102728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:08.152 [2024-07-26 05:18:27.102804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:21:08.152 [2024-07-26 05:18:27.102838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.197680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.152 [2024-07-26 05:18:27.197866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:08.152 [2024-07-26 05:18:27.197943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.796 ms 00:21:08.152 [2024-07-26 05:18:27.197960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.210803] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:08.152 [2024-07-26 05:18:27.214021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.152 [2024-07-26 05:18:27.214051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:08.152 [2024-07-26 05:18:27.214065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.011 ms 00:21:08.152 [2024-07-26 05:18:27.214076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.214172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.152 [2024-07-26 05:18:27.214187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:08.152 [2024-07-26 05:18:27.214199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:08.152 [2024-07-26 05:18:27.214221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.214293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.152 [2024-07-26 05:18:27.214305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:08.152 [2024-07-26 05:18:27.214316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:08.152 [2024-07-26 05:18:27.214326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.216374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.152 [2024-07-26 05:18:27.216402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:08.152 [2024-07-26 05:18:27.216416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.026 ms 00:21:08.152 [2024-07-26 05:18:27.216436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.216459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.152 [2024-07-26 05:18:27.216469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:08.152 [2024-07-26 05:18:27.216479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:08.152 [2024-07-26 05:18:27.216493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.216529] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:08.152 [2024-07-26 05:18:27.216540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:21:08.152 [2024-07-26 05:18:27.216550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:08.152 [2024-07-26 05:18:27.216559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:08.152 [2024-07-26 05:18:27.216571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.152 [2024-07-26 05:18:27.254336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.152 [2024-07-26 05:18:27.254472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:08.152 [2024-07-26 05:18:27.254605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.744 ms 00:21:08.152 [2024-07-26 05:18:27.254643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.153 [2024-07-26 05:18:27.254731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.153 [2024-07-26 05:18:27.254775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:08.153 [2024-07-26 05:18:27.254806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:08.153 [2024-07-26 05:18:27.254885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.153 [2024-07-26 05:18:27.256197] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 393.486 ms, result 0 00:21:39.862  Copying: 1024/1024 [MB] (average 33 MBps)[2024-07-26 05:18:58.772988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.862 [2024-07-26 05:18:58.773055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:39.862 [2024-07-26 05:18:58.773071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:39.862 [2024-07-26 05:18:58.773083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.862 [2024-07-26 05:18:58.773107] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:39.862 [2024-07-26 05:18:58.777363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.862 [2024-07-26 05:18:58.777415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:39.862 [2024-07-26 05:18:58.777429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.238 ms 00:21:39.862 [2024-07-26 05:18:58.777445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.862 [2024-07-26 
05:18:58.777671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.862 [2024-07-26 05:18:58.777693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:39.862 [2024-07-26 05:18:58.777704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:21:39.862 [2024-07-26 05:18:58.777714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.862 [2024-07-26 05:18:58.780452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.862 [2024-07-26 05:18:58.780473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:39.862 [2024-07-26 05:18:58.780484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.722 ms 00:21:39.862 [2024-07-26 05:18:58.780494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.862 [2024-07-26 05:18:58.785649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.862 [2024-07-26 05:18:58.785682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:39.862 [2024-07-26 05:18:58.785694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.133 ms 00:21:39.862 [2024-07-26 05:18:58.785704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.862 [2024-07-26 05:18:58.826572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.862 [2024-07-26 05:18:58.826622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:39.862 [2024-07-26 05:18:58.826637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.793 ms 00:21:39.862 [2024-07-26 05:18:58.826648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.862 [2024-07-26 05:18:58.848868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.862 [2024-07-26 05:18:58.848914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:39.862 [2024-07-26 05:18:58.848930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.169 ms 00:21:39.862 [2024-07-26 05:18:58.848940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.862 [2024-07-26 05:18:58.849085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.862 [2024-07-26 05:18:58.849106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:39.862 [2024-07-26 05:18:58.849117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:21:39.862 [2024-07-26 05:18:58.849127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.862 [2024-07-26 05:18:58.887752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.862 [2024-07-26 05:18:58.887791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:39.862 [2024-07-26 05:18:58.887805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.608 ms 00:21:39.862 [2024-07-26 05:18:58.887814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.862 [2024-07-26 05:18:58.925787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.862 [2024-07-26 05:18:58.925823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:39.862 [2024-07-26 05:18:58.925837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.936 ms 00:21:39.862 [2024-07-26 05:18:58.925846] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.862 [2024-07-26 05:18:58.962579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.862 [2024-07-26 05:18:58.962613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:39.862 [2024-07-26 05:18:58.962625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.696 ms 00:21:39.862 [2024-07-26 05:18:58.962634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.120 [2024-07-26 05:18:58.997820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.120 [2024-07-26 05:18:58.997856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:40.120 [2024-07-26 05:18:58.997868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.103 ms 00:21:40.120 [2024-07-26 05:18:58.997894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.120 [2024-07-26 05:18:58.997930] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:40.120 [2024-07-26 05:18:58.997946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.997958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.997969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.997980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.997990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998127] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:40.120 [2024-07-26 05:18:58.998349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 
05:18:58.998399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:21:40.121 [2024-07-26 05:18:58.998670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.998992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.999003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:40.121 [2024-07-26 05:18:58.999020] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:40.121 [2024-07-26 05:18:58.999031] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f06a714-f9d0-4ac9-8874-e98b3d8385dd 00:21:40.121 [2024-07-26 05:18:58.999047] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:40.121 [2024-07-26 05:18:58.999057] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:40.121 [2024-07-26 05:18:58.999066] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:40.121 [2024-07-26 05:18:58.999076] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:40.121 [2024-07-26 05:18:58.999086] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:40.121 [2024-07-26 05:18:58.999095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:40.121 [2024-07-26 05:18:58.999105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:40.121 [2024-07-26 05:18:58.999114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:40.121 [2024-07-26 05:18:58.999123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:40.121 [2024-07-26 05:18:58.999133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.121 [2024-07-26 05:18:58.999148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:40.121 [2024-07-26 05:18:58.999159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.203 ms 00:21:40.121 [2024-07-26 05:18:58.999178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.121 [2024-07-26 05:18:59.017884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.121 [2024-07-26 05:18:59.017918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:40.121 [2024-07-26 05:18:59.017930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.639 ms 00:21:40.121 [2024-07-26 05:18:59.017940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.121 [2024-07-26 05:18:59.018189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.121 [2024-07-26 05:18:59.018200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L 
checkpointing 00:21:40.121 [2024-07-26 05:18:59.018228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:21:40.121 [2024-07-26 05:18:59.018243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.121 [2024-07-26 05:18:59.071054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.121 [2024-07-26 05:18:59.071087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:40.121 [2024-07-26 05:18:59.071100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.121 [2024-07-26 05:18:59.071126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.121 [2024-07-26 05:18:59.071179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.121 [2024-07-26 05:18:59.071190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:40.121 [2024-07-26 05:18:59.071200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.121 [2024-07-26 05:18:59.071214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.121 [2024-07-26 05:18:59.071307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.121 [2024-07-26 05:18:59.071321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:40.121 [2024-07-26 05:18:59.071331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.121 [2024-07-26 05:18:59.071341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.121 [2024-07-26 05:18:59.071358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.121 [2024-07-26 05:18:59.071368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:40.121 [2024-07-26 05:18:59.071378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.121 [2024-07-26 05:18:59.071387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.121 [2024-07-26 05:18:59.183432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.121 [2024-07-26 05:18:59.183490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:40.121 [2024-07-26 05:18:59.183504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.121 [2024-07-26 05:18:59.183514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.378 [2024-07-26 05:18:59.227593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.378 [2024-07-26 05:18:59.227636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:40.378 [2024-07-26 05:18:59.227650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.378 [2024-07-26 05:18:59.227676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.378 [2024-07-26 05:18:59.227758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.378 [2024-07-26 05:18:59.227769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:40.378 [2024-07-26 05:18:59.227780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.378 [2024-07-26 05:18:59.227790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.378 [2024-07-26 05:18:59.227832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.378 [2024-07-26 
05:18:59.227843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:40.378 [2024-07-26 05:18:59.227853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.378 [2024-07-26 05:18:59.227862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.378 [2024-07-26 05:18:59.228004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.378 [2024-07-26 05:18:59.228019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:40.378 [2024-07-26 05:18:59.228029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.378 [2024-07-26 05:18:59.228038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.378 [2024-07-26 05:18:59.228085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.378 [2024-07-26 05:18:59.228099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:40.378 [2024-07-26 05:18:59.228110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.378 [2024-07-26 05:18:59.228120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.378 [2024-07-26 05:18:59.228177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.378 [2024-07-26 05:18:59.228193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:40.378 [2024-07-26 05:18:59.228203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.378 [2024-07-26 05:18:59.228213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.378 [2024-07-26 05:18:59.228280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:40.378 [2024-07-26 05:18:59.228292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:40.378 [2024-07-26 05:18:59.228318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:40.378 [2024-07-26 05:18:59.228328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.378 [2024-07-26 05:18:59.228448] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 455.427 ms, result 0 00:21:41.754 00:21:41.754 00:21:41.754 05:19:00 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:43.129 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:43.129 05:19:02 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:43.386 [2024-07-26 05:19:02.288705] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:43.386 [2024-07-26 05:19:02.288838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75111 ] 00:21:43.386 [2024-07-26 05:19:02.451369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.644 [2024-07-26 05:19:02.706885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.211 [2024-07-26 05:19:03.103774] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:44.211 [2024-07-26 05:19:03.103833] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:44.211 [2024-07-26 05:19:03.258725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.211 [2024-07-26 05:19:03.258778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:44.211 [2024-07-26 05:19:03.258795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:44.211 [2024-07-26 05:19:03.258806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.211 [2024-07-26 05:19:03.258855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.211 [2024-07-26 05:19:03.258868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:44.211 [2024-07-26 05:19:03.258878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:44.211 [2024-07-26 05:19:03.258888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.211 [2024-07-26 05:19:03.258909] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:44.211 [2024-07-26 05:19:03.260131] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:44.211 [2024-07-26 05:19:03.260163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.211 [2024-07-26 05:19:03.260174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:44.211 [2024-07-26 05:19:03.260185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:21:44.211 [2024-07-26 05:19:03.260195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.211 [2024-07-26 05:19:03.261628] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:44.211 [2024-07-26 05:19:03.281342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.211 [2024-07-26 05:19:03.281376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:44.211 [2024-07-26 05:19:03.281410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.715 ms 00:21:44.211 [2024-07-26 05:19:03.281420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.211 [2024-07-26 05:19:03.281480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.211 [2024-07-26 05:19:03.281492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:44.211 [2024-07-26 05:19:03.281503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:44.211 [2024-07-26 05:19:03.281513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.211 [2024-07-26 05:19:03.288222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.211 [2024-07-26 
05:19:03.288247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:44.211 [2024-07-26 05:19:03.288258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.641 ms 00:21:44.211 [2024-07-26 05:19:03.288268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.211 [2024-07-26 05:19:03.288350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.211 [2024-07-26 05:19:03.288363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:44.211 [2024-07-26 05:19:03.288373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:44.211 [2024-07-26 05:19:03.288382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.211 [2024-07-26 05:19:03.288419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.211 [2024-07-26 05:19:03.288433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:44.211 [2024-07-26 05:19:03.288444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:44.211 [2024-07-26 05:19:03.288453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.211 [2024-07-26 05:19:03.288480] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:44.211 [2024-07-26 05:19:03.294012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.211 [2024-07-26 05:19:03.294045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:44.211 [2024-07-26 05:19:03.294058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.541 ms 00:21:44.211 [2024-07-26 05:19:03.294069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.211 [2024-07-26 05:19:03.294102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.211 [2024-07-26 05:19:03.294113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:44.211 [2024-07-26 05:19:03.294124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:44.211 [2024-07-26 05:19:03.294134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.211 [2024-07-26 05:19:03.294183] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:44.211 [2024-07-26 05:19:03.294363] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:44.211 [2024-07-26 05:19:03.294449] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:44.211 [2024-07-26 05:19:03.294504] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:44.211 [2024-07-26 05:19:03.294592] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:44.212 [2024-07-26 05:19:03.294606] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:44.212 [2024-07-26 05:19:03.294619] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:44.212 [2024-07-26 05:19:03.294634] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:44.212 [2024-07-26 05:19:03.294646] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:44.212 [2024-07-26 05:19:03.294662] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:44.212 [2024-07-26 05:19:03.294672] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:44.212 [2024-07-26 05:19:03.294682] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:44.212 [2024-07-26 05:19:03.294693] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:44.212 [2024-07-26 05:19:03.294704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.212 [2024-07-26 05:19:03.294714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:44.212 [2024-07-26 05:19:03.294725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:21:44.212 [2024-07-26 05:19:03.294735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.212 [2024-07-26 05:19:03.294796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.212 [2024-07-26 05:19:03.294808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:44.212 [2024-07-26 05:19:03.294820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:44.212 [2024-07-26 05:19:03.294830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.212 [2024-07-26 05:19:03.294897] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:44.212 [2024-07-26 05:19:03.294910] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:44.212 [2024-07-26 05:19:03.294920] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:44.212 [2024-07-26 05:19:03.294931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.212 [2024-07-26 05:19:03.294941] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:44.212 [2024-07-26 05:19:03.294950] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:44.212 [2024-07-26 05:19:03.294960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:44.212 [2024-07-26 05:19:03.294970] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:44.212 [2024-07-26 05:19:03.294979] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:44.212 [2024-07-26 05:19:03.294989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:44.212 [2024-07-26 05:19:03.294998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:44.212 [2024-07-26 05:19:03.295009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:44.212 [2024-07-26 05:19:03.295018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:44.212 [2024-07-26 05:19:03.295027] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:44.212 [2024-07-26 05:19:03.295037] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:44.212 [2024-07-26 05:19:03.295046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.212 [2024-07-26 05:19:03.295055] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:44.212 [2024-07-26 05:19:03.295064] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:44.212 [2024-07-26 05:19:03.295074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:44.212 [2024-07-26 05:19:03.295084] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:44.212 [2024-07-26 05:19:03.295093] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:44.212 [2024-07-26 05:19:03.295113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:44.212 [2024-07-26 05:19:03.295123] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:44.212 [2024-07-26 05:19:03.295133] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:44.212 [2024-07-26 05:19:03.295142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:44.212 [2024-07-26 05:19:03.295151] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:44.212 [2024-07-26 05:19:03.295160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:44.212 [2024-07-26 05:19:03.295170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:44.212 [2024-07-26 05:19:03.295179] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:44.212 [2024-07-26 05:19:03.295188] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:44.212 [2024-07-26 05:19:03.295197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:44.212 [2024-07-26 05:19:03.295206] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:44.212 [2024-07-26 05:19:03.295215] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:44.212 [2024-07-26 05:19:03.295236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:44.212 [2024-07-26 05:19:03.295246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:44.212 [2024-07-26 05:19:03.295255] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:44.212 [2024-07-26 05:19:03.295264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:44.212 [2024-07-26 05:19:03.295273] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:44.212 [2024-07-26 05:19:03.295282] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:44.212 [2024-07-26 05:19:03.295291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:44.212 [2024-07-26 05:19:03.295300] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:44.212 [2024-07-26 05:19:03.295310] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:44.212 [2024-07-26 05:19:03.295320] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:44.212 [2024-07-26 05:19:03.295334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:44.212 [2024-07-26 05:19:03.295345] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:44.212 [2024-07-26 05:19:03.295355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:44.212 [2024-07-26 05:19:03.295375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:44.212 [2024-07-26 05:19:03.295384] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:44.212 [2024-07-26 05:19:03.295393] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:44.212 [2024-07-26 05:19:03.295403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:44.212 [2024-07-26 05:19:03.295413] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:44.212 [2024-07-26 05:19:03.295424] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:44.212 [2024-07-26 05:19:03.295435] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:44.212 [2024-07-26 05:19:03.295446] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:44.212 [2024-07-26 05:19:03.295456] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:44.212 [2024-07-26 05:19:03.295467] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:44.212 [2024-07-26 05:19:03.295477] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:44.212 [2024-07-26 05:19:03.295487] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:44.212 [2024-07-26 05:19:03.295497] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:44.212 [2024-07-26 05:19:03.295507] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:44.212 [2024-07-26 05:19:03.295517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:44.212 [2024-07-26 05:19:03.295527] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:44.212 [2024-07-26 05:19:03.295537] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:44.212 [2024-07-26 05:19:03.295548] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:44.212 [2024-07-26 05:19:03.295558] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:44.212 [2024-07-26 05:19:03.295568] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:44.212 [2024-07-26 05:19:03.295579] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:44.212 [2024-07-26 05:19:03.295589] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:44.212 [2024-07-26 05:19:03.295599] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:44.212 [2024-07-26 05:19:03.295610] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:44.212 [2024-07-26 05:19:03.295620] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:21:44.212 [2024-07-26 05:19:03.295630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.212 [2024-07-26 05:19:03.295640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:44.212 [2024-07-26 05:19:03.295650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:21:44.212 [2024-07-26 05:19:03.295659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.476 [2024-07-26 05:19:03.319956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.476 [2024-07-26 05:19:03.320101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:44.476 [2024-07-26 05:19:03.320279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.252 ms 00:21:44.476 [2024-07-26 05:19:03.320442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.476 [2024-07-26 05:19:03.320552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.476 [2024-07-26 05:19:03.320659] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:44.476 [2024-07-26 05:19:03.320773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:44.476 [2024-07-26 05:19:03.320811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.476 [2024-07-26 05:19:03.382243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.476 [2024-07-26 05:19:03.382397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:44.476 [2024-07-26 05:19:03.382524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.354 ms 00:21:44.476 [2024-07-26 05:19:03.382570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.476 [2024-07-26 05:19:03.382636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.476 [2024-07-26 05:19:03.382668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:44.476 [2024-07-26 05:19:03.382698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:44.476 [2024-07-26 05:19:03.382774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.476 [2024-07-26 05:19:03.383289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.476 [2024-07-26 05:19:03.383395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:44.476 [2024-07-26 05:19:03.383469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:21:44.476 [2024-07-26 05:19:03.383505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.476 [2024-07-26 05:19:03.383646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.476 [2024-07-26 05:19:03.383682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:44.476 [2024-07-26 05:19:03.383752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:21:44.476 [2024-07-26 05:19:03.383785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.476 [2024-07-26 05:19:03.405932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.476 [2024-07-26 05:19:03.406072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:44.476 [2024-07-26 05:19:03.406237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.101 ms 00:21:44.476 [2024-07-26 
05:19:03.406279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.476 [2024-07-26 05:19:03.425022] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:44.476 [2024-07-26 05:19:03.425199] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:44.476 [2024-07-26 05:19:03.425314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.476 [2024-07-26 05:19:03.425359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:44.476 [2024-07-26 05:19:03.425390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.816 ms 00:21:44.476 [2024-07-26 05:19:03.425419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.476 [2024-07-26 05:19:03.454537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.476 [2024-07-26 05:19:03.454694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:44.476 [2024-07-26 05:19:03.454797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.062 ms 00:21:44.477 [2024-07-26 05:19:03.454835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.477 [2024-07-26 05:19:03.473714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.477 [2024-07-26 05:19:03.473865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:44.477 [2024-07-26 05:19:03.473964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.816 ms 00:21:44.477 [2024-07-26 05:19:03.474002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.477 [2024-07-26 05:19:03.492671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.477 [2024-07-26 05:19:03.492806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:44.477 [2024-07-26 05:19:03.492916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.615 ms 00:21:44.477 [2024-07-26 05:19:03.492951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.477 [2024-07-26 05:19:03.493502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.477 [2024-07-26 05:19:03.493556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:44.477 [2024-07-26 05:19:03.493570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:21:44.477 [2024-07-26 05:19:03.493580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.737 [2024-07-26 05:19:03.584532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.737 [2024-07-26 05:19:03.584588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:44.737 [2024-07-26 05:19:03.584621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.928 ms 00:21:44.737 [2024-07-26 05:19:03.584631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.737 [2024-07-26 05:19:03.596571] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:44.737 [2024-07-26 05:19:03.599370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.737 [2024-07-26 05:19:03.599398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:44.737 [2024-07-26 05:19:03.599411] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.687 ms 00:21:44.737 [2024-07-26 05:19:03.599421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.737 [2024-07-26 05:19:03.599503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.737 [2024-07-26 05:19:03.599519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:44.737 [2024-07-26 05:19:03.599530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:44.737 [2024-07-26 05:19:03.599539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.737 [2024-07-26 05:19:03.599603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.737 [2024-07-26 05:19:03.599614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:44.737 [2024-07-26 05:19:03.599624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:44.737 [2024-07-26 05:19:03.599633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.737 [2024-07-26 05:19:03.601665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.737 [2024-07-26 05:19:03.601694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:44.737 [2024-07-26 05:19:03.601708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.016 ms 00:21:44.737 [2024-07-26 05:19:03.601719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.737 [2024-07-26 05:19:03.601748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.737 [2024-07-26 05:19:03.601759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:44.737 [2024-07-26 05:19:03.601769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:44.737 [2024-07-26 05:19:03.601784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.737 [2024-07-26 05:19:03.601820] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:44.737 [2024-07-26 05:19:03.601832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.737 [2024-07-26 05:19:03.601843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:44.737 [2024-07-26 05:19:03.601853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:44.737 [2024-07-26 05:19:03.601865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.737 [2024-07-26 05:19:03.638611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.737 [2024-07-26 05:19:03.638647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:44.737 [2024-07-26 05:19:03.638660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.726 ms 00:21:44.737 [2024-07-26 05:19:03.638686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.737 [2024-07-26 05:19:03.638754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.737 [2024-07-26 05:19:03.638772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:44.737 [2024-07-26 05:19:03.638783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:44.737 [2024-07-26 05:19:03.638793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.737 [2024-07-26 05:19:03.639896] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 380.739 ms, result 0 00:22:18.118  Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-26 05:19:36.955047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.118 [2024-07-26 05:19:36.955100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:18.118 [2024-07-26 05:19:36.955118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:18.118 [2024-07-26 05:19:36.955139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.118 [2024-07-26 05:19:36.957119] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:18.118 [2024-07-26 05:19:36.961616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.118 [2024-07-26 05:19:36.961649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:18.118 [2024-07-26 05:19:36.961664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.460 ms 00:22:18.118 [2024-07-26 05:19:36.961676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.118 [2024-07-26 05:19:36.971881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.118 [2024-07-26 05:19:36.971919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:18.118 [2024-07-26 05:19:36.971933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.811 ms 00:22:18.118 [2024-07-26 05:19:36.971943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.118 [2024-07-26 05:19:36.992654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.118 [2024-07-26 05:19:36.992696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:18.118 [2024-07-26 05:19:36.992712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.685 ms 00:22:18.118 [2024-07-26 05:19:36.992725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.118 [2024-07-26 05:19:36.998125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.118 [2024-07-26 05:19:36.998157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:18.118 [2024-07-26 05:19:36.998169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.363 ms 00:22:18.118 [2024-07-26 05:19:36.998180] mngt/ftl_mngt.c:
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.118 [2024-07-26 05:19:37.038197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.118 [2024-07-26 05:19:37.038244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:18.118 [2024-07-26 05:19:37.038259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.952 ms 00:22:18.118 [2024-07-26 05:19:37.038269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.118 [2024-07-26 05:19:37.060434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.118 [2024-07-26 05:19:37.060480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:18.118 [2024-07-26 05:19:37.060494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.125 ms 00:22:18.118 [2024-07-26 05:19:37.060505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.118 [2024-07-26 05:19:37.156969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.118 [2024-07-26 05:19:37.157017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:18.118 [2024-07-26 05:19:37.157033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.420 ms 00:22:18.118 [2024-07-26 05:19:37.157045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.118 [2024-07-26 05:19:37.196208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.118 [2024-07-26 05:19:37.196252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:18.118 [2024-07-26 05:19:37.196266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.145 ms 00:22:18.118 [2024-07-26 05:19:37.196291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.379 [2024-07-26 05:19:37.234932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.379 [2024-07-26 05:19:37.234968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:18.379 [2024-07-26 05:19:37.234982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.603 ms 00:22:18.379 [2024-07-26 05:19:37.235008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.379 [2024-07-26 05:19:37.272766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.379 [2024-07-26 05:19:37.272802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:18.379 [2024-07-26 05:19:37.272815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.720 ms 00:22:18.379 [2024-07-26 05:19:37.272825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.379 [2024-07-26 05:19:37.311176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.379 [2024-07-26 05:19:37.311220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:18.379 [2024-07-26 05:19:37.311235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.263 ms 00:22:18.379 [2024-07-26 05:19:37.311245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.379 [2024-07-26 05:19:37.311282] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:18.379 [2024-07-26 05:19:37.311299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 113408 / 261120 wr_cnt: 1 state: open 00:22:18.379 [2024-07-26 05:19:37.311312] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311584] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:18.379 [2024-07-26 05:19:37.311750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 
[2024-07-26 05:19:37.311856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.311996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:22:18.380 [2024-07-26 05:19:37.312124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:18.380 [2024-07-26 05:19:37.312399] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:22:18.380 [2024-07-26 05:19:37.312399] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:18.380 [2024-07-26 05:19:37.312410] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f06a714-f9d0-4ac9-8874-e98b3d8385dd
00:22:18.380 [2024-07-26 05:19:37.312421] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 113408
00:22:18.380 [2024-07-26 05:19:37.312430] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 114368
00:22:18.380 [2024-07-26 05:19:37.312439] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 113408
00:22:18.380 [2024-07-26 05:19:37.312450] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0085
00:22:18.380 [2024-07-26 05:19:37.312460] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:22:18.380 [2024-07-26 05:19:37.312511] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 1.230 ms, status: 0
00:22:18.380 [2024-07-26 05:19:37.332507] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 19.910 ms, status: 0
00:22:18.380 [2024-07-26 05:19:37.332842] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.237 ms, status: 0
00:22:18.380 [2024-07-26 05:19:37.388825] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
00:22:18.380 [2024-07-26 05:19:37.388965] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
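The WAF row in the stats dump is simply total writes divided by user writes; checking it against the two counters printed just above it:

    #include <stdio.h>

    /* Write amplification factor as reported by ftl_dev_dump_stats:
     * WAF = total writes / user writes (values copied from the log). */
    int main(void)
    {
        double total_writes = 114368.0;
        double user_writes  = 113408.0;
        printf("WAF = %.4f\n", total_writes / user_writes); /* 1.0085 */
        return 0;
    }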
00:22:18.381 [2024-07-26 05:19:37.389077] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:22:18.381 [2024-07-26 05:19:37.389133] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:22:18.640 [2024-07-26 05:19:37.508695] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:22:18.640 [2024-07-26 05:19:37.556459] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:22:18.640 [2024-07-26 05:19:37.556640] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
00:22:18.640 [2024-07-26 05:19:37.556724] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
00:22:18.640 [2024-07-26 05:19:37.556870] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
00:22:18.640 [2024-07-26 05:19:37.556942] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
00:22:18.640 [2024-07-26 05:19:37.557011] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
00:22:18.641 [2024-07-26 05:19:37.557089] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:22:18.641 [2024-07-26 05:19:37.557489] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 602.158 ms, result 0
00:22:20.546
00:22:20.546
00:22:20.546 05:19:39 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:22:20.546 [2024-07-26 05:19:39.314158] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:22:20.546 [2024-07-26 05:19:39.314581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75485 ]
00:22:20.546 [2024-07-26 05:19:39.489305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:20.805 [2024-07-26 05:19:39.718147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:21.064 [2024-07-26 05:19:40.122080] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:21.064 [2024-07-26 05:19:40.122149] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:21.323 [2024-07-26 05:19:40.277436] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.007 ms, status: 0
00:22:21.324 [2024-07-26 05:19:40.277567] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 0.030 ms, status: 0
00:22:21.324 [2024-07-26 05:19:40.277621] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:22:21.324 [2024-07-26 05:19:40.278846] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
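The spdk_dd restore step above reads --count=262144 blocks from ftl0 starting --skip=131072 blocks in. Assuming the 4 KiB FTL block size implied by the layout dump further down (the block size itself is not printed here), that is a 1 GiB read starting at the 512 MiB mark, consistent with the 1024 MB "Copying" progress that follows. A quick sanity check under that assumption:

    #include <stdio.h>

    /* Sanity-check the --skip/--count arguments, assuming 4 KiB blocks
     * (an assumption; the block size is not stated in the log). */
    int main(void)
    {
        unsigned long long block = 4096, skip = 131072, count = 262144;
        printf("skip = %llu MiB, count = %llu MiB\n",
               skip * block >> 20, count * block >> 20); /* 512, 1024 */
        return 0;
    }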
00:22:21.324 [2024-07-26 05:19:40.278879] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 1.262 ms, status: 0
00:22:21.324 [2024-07-26 05:19:40.280341] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:22:21.324 [2024-07-26 05:19:40.299729] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 19.389 ms, status: 0
00:22:21.324 [2024-07-26 05:19:40.299859] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.027 ms, status: 0
00:22:21.324 [2024-07-26 05:19:40.306878] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 6.916 ms, status: 0
00:22:21.324 [2024-07-26 05:19:40.307322] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.074 ms, status: 0
00:22:21.324 [2024-07-26 05:19:40.307553] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.009 ms, status: 0
00:22:21.324 [2024-07-26 05:19:40.307624] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:21.324 [2024-07-26 05:19:40.313595] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 5.980 ms, status: 0
00:22:21.324 [2024-07-26 05:19:40.313891] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.010 ms, status: 0
00:22:21.324 [2024-07-26 05:19:40.314110] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:22:21.324 [2024-07-26 05:19:40.314170] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
00:22:21.324 [2024-07-26 05:19:40.314261] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:22:21.324 [2024-07-26 05:19:40.314436] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
00:22:21.324 [2024-07-26 05:19:40.314539] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:22:21.324 [2024-07-26 05:19:40.314588] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:22:21.324 [2024-07-26 05:19:40.314603] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:22:21.324 [2024-07-26 05:19:40.314616] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:22:21.324 [2024-07-26 05:19:40.314628] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:22:21.324 [2024-07-26 05:19:40.314643] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:22:21.324 [2024-07-26 05:19:40.314653] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:22:21.324 [2024-07-26 05:19:40.314662] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:22:21.324 [2024-07-26 05:19:40.314672] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:22:21.324 [2024-07-26 05:19:40.314684] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.577 ms, status: 0
00:22:21.324 [2024-07-26 05:19:40.314779] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.040 ms, status: 0
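The "L2P entries: 20971520" and "L2P address size: 4" rows above pin down the mapping-table footprint: 20971520 entries of 4 bytes each is 80 MiB, which is exactly the l2p region size in the layout dump that follows. The check, with both values copied from the log:

    #include <stdio.h>

    /* L2P table footprint from the two values printed above;
     * 80 MiB should match the "Region l2p" size in the dump below. */
    int main(void)
    {
        unsigned long long entries = 20971520ULL, addr_size = 4ULL;
        printf("L2P table = %llu MiB\n", entries * addr_size >> 20); /* 80 */
        return 0;
    }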
00:22:21.324 [2024-07-26 05:19:40.314879] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:22:21.324 [2024-07-26 05:19:40.314891] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:22:21.324 [2024-07-26 05:19:40.314923] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
00:22:21.324 [2024-07-26 05:19:40.314951] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
00:22:21.324 [2024-07-26 05:19:40.314979] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
00:22:21.324 [2024-07-26 05:19:40.315007] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 97.62 MiB, blocks 0.12 MiB
00:22:21.324 [2024-07-26 05:19:40.315036] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 97.75 MiB, blocks 0.12 MiB
00:22:21.324 [2024-07-26 05:19:40.315064] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc: offset 97.88 MiB, blocks 4096.00 MiB
00:22:21.324 [2024-07-26 05:19:40.315104] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 4.00 MiB
00:22:21.324 [2024-07-26 05:19:40.315132] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 85.12 MiB, blocks 4.00 MiB
00:22:21.324 [2024-07-26 05:19:40.315160] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 89.12 MiB, blocks 4.00 MiB
00:22:21.324 [2024-07-26 05:19:40.315187] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 93.12 MiB, blocks 4.00 MiB
00:22:21.325 [2024-07-26 05:19:40.315217] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 97.12 MiB, blocks 0.25 MiB
00:22:21.325 [2024-07-26 05:19:40.315254] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 97.38 MiB, blocks 0.25 MiB
00:22:21.325 [2024-07-26 05:19:40.315281] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:22:21.325 [2024-07-26 05:19:40.315291] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:22:21.325 [2024-07-26 05:19:40.315324] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:22:21.325 [2024-07-26 05:19:40.315354] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:22:21.325 [2024-07-26 05:19:40.315383] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:22:21.325 [2024-07-26 05:19:40.315395] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:21.325 [2024-07-26 05:19:40.315407] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:22:21.325 [2024-07-26 05:19:40.315418] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:22:21.325 [2024-07-26 05:19:40.315429] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:22:21.325 [2024-07-26 05:19:40.315440] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:22:21.325 [2024-07-26 05:19:40.315451] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:22:21.325 [2024-07-26 05:19:40.315461] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:22:21.325 [2024-07-26 05:19:40.315472] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:22:21.325 [2024-07-26 05:19:40.315482] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:22:21.325 [2024-07-26 05:19:40.315493] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:22:21.325 [2024-07-26 05:19:40.315503] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:22:21.325 [2024-07-26 05:19:40.315513] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:22:21.325 [2024-07-26 05:19:40.315524] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:22:21.325 [2024-07-26 05:19:40.315536] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:22:21.325 [2024-07-26 05:19:40.315546] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:22:21.325 [2024-07-26 05:19:40.315557] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:21.325 [2024-07-26 05:19:40.315569] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:22:21.325 [2024-07-26 05:19:40.315579] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:22:21.325 [2024-07-26 05:19:40.315589] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:22:21.325 [2024-07-26 05:19:40.315600] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:22:21.325 [2024-07-26 05:19:40.315610] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.767 ms, status: 0
00:22:21.325 [2024-07-26 05:19:40.340168] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 24.484 ms, status: 0
00:22:21.325 [2024-07-26 05:19:40.340540] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.053 ms, status: 0
00:22:21.325 [2024-07-26 05:19:40.404649] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 63.884 ms, status: 0
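The SB metadata rows above give blk_offs and blk_sz in FTL blocks, printed in hex. At the 4 KiB block size assumed earlier they reconcile with the MiB figures in the layout dump: type:0x2 (the L2P region) is 0x5000 blocks = 80 MiB, and type:0x9 (the base-device data region) is 0x1900000 blocks = 102400 MiB. The conversion:

    #include <stdio.h>

    /* Convert blk_sz values from the SB metadata dump to MiB,
     * assuming 4 KiB FTL blocks (an assumption; not printed in the log). */
    int main(void)
    {
        unsigned long long block = 4096;
        printf("type:0x2 -> %llu MiB\n", 0x5000ULL    * block >> 20); /* 80     */
        printf("type:0x9 -> %llu MiB\n", 0x1900000ULL * block >> 20); /* 102400 */
        return 0;
    }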
00:22:21.325 [2024-07-26 05:19:40.404988] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.003 ms, status: 0
00:22:21.325 [2024-07-26 05:19:40.405659] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.427 ms, status: 0
00:22:21.325 [2024-07-26 05:19:40.405963] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.092 ms, status: 0
00:22:21.325 [2024-07-26 05:19:40.428857] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 22.666 ms, status: 0
00:22:21.585 [2024-07-26 05:19:40.449583] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:22:21.585 [2024-07-26 05:19:40.449740] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:22:21.585 [2024-07-26 05:19:40.449836] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 20.582 ms, status: 0
00:22:21.585 [2024-07-26 05:19:40.481247] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 31.262 ms, status: 0
00:22:21.585 [2024-07-26 05:19:40.500455] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 18.962 ms, status: 0
00:22:21.585 [2024-07-26 05:19:40.519569] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 19.019 ms, status: 0
00:22:21.585 [2024-07-26 05:19:40.520155] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.422 ms, status: 0
00:22:21.585 [2024-07-26 05:19:40.612164] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 91.950 ms, status: 0
00:22:21.585 [2024-07-26 05:19:40.625249] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:22:21.585 [2024-07-26 05:19:40.628292] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 15.977 ms, status: 0
00:22:21.585 [2024-07-26 05:19:40.628437] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.006 ms, status: 0
00:22:21.585 [2024-07-26 05:19:40.629859] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 1.342 ms, status: 0
00:22:21.585 [2024-07-26 05:19:40.631978] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Free P2L region bufs, duration: 2.032 ms, status: 0
00:22:21.585 [2024-07-26 05:19:40.632060] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.005 ms, status: 0
00:22:21.585 [2024-07-26 05:19:40.632132] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:22:21.585 [2024-07-26 05:19:40.632145] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.013 ms, status: 0
00:22:21.586 [2024-07-26 05:19:40.670621] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 38.422 ms, status: 0
00:22:21.586 [2024-07-26 05:19:40.670753] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.031 ms, status: 0
00:22:21.586 [2024-07-26 05:19:40.677425] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 397.217 ms, result 0
00:22:53.210 Copying: 1024/1024 [MB] (average 33 MBps)
00:22:53.210 [2024-07-26 05:20:12.058938] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.003 ms, status: 0
00:22:53.210 [2024-07-26 05:20:12.059091] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:53.210 [2024-07-26 05:20:12.063414] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 4.303 ms, status: 0
00:22:53.210 [2024-07-26 05:20:12.063722] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 0.216 ms, status: 0
00:22:53.210 [2024-07-26 05:20:12.070303] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 6.503 ms, status: 0
00:22:53.210 [2024-07-26 05:20:12.076640] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P unmaps, duration: 6.218 ms, status: 0
00:22:53.210 [2024-07-26 05:20:12.119994] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 43.229 ms, status: 0
00:22:53.210 [2024-07-26 05:20:12.141163] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 21.027 ms, status: 0
00:22:53.210 [2024-07-26 05:20:12.253871] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 112.563 ms, status: 0
00:22:53.210 [2024-07-26 05:20:12.293176] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: persist band info metadata, duration: 39.214 ms, status: 0
00:22:53.489 [2024-07-26 05:20:12.332276] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: persist trim metadata, duration: 38.990 ms, status: 0
00:22:53.489 [2024-07-26 05:20:12.369061] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 36.681 ms, status: 0
00:22:53.489 [2024-07-26 05:20:12.406263] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 37.022 ms, status: 0
00:22:53.489 [2024-07-26 05:20:12.406360] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:53.489 [2024-07-26 05:20:12.406377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 132864 / 261120 wr_cnt: 1 state: open
00:22:53.489 [2024-07-26 05:20:12.406390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-41: 0 / 261120 wr_cnt: 0 state: free
00:22:53.489 [2024-07-26 05:20:12.406828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.406994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407089] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:53.489 [2024-07-26 05:20:12.407152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407362] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:53.490 [2024-07-26 05:20:12.407465] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:53.490 [2024-07-26 05:20:12.407475] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2f06a714-f9d0-4ac9-8874-e98b3d8385dd 00:22:53.490 [2024-07-26 05:20:12.407486] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 132864 00:22:53.490 [2024-07-26 05:20:12.407496] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 20416 00:22:53.490 [2024-07-26 05:20:12.407506] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 19456 00:22:53.490 [2024-07-26 05:20:12.407516] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0493 00:22:53.490 [2024-07-26 05:20:12.407526] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:53.490 [2024-07-26 05:20:12.407536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:53.490 [2024-07-26 05:20:12.407551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:53.490 [2024-07-26 05:20:12.407561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:53.490 [2024-07-26 05:20:12.407570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:53.490 [2024-07-26 05:20:12.407580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.490 [2024-07-26 05:20:12.407590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:53.490 [2024-07-26 05:20:12.407600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:22:53.490 [2024-07-26 05:20:12.407610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.490 [2024-07-26 05:20:12.427440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.490 [2024-07-26 05:20:12.427472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:53.490 [2024-07-26 05:20:12.427485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.775 ms 00:22:53.490 [2024-07-26 05:20:12.427495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.490 [2024-07-26 05:20:12.427742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.490 [2024-07-26 05:20:12.427756] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:53.490 [2024-07-26 05:20:12.427767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:22:53.490 [2024-07-26 05:20:12.427776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.490 [2024-07-26 05:20:12.481601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.490 [2024-07-26 05:20:12.481638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:53.490 [2024-07-26 05:20:12.481657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.490 [2024-07-26 05:20:12.481667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.490 [2024-07-26 05:20:12.481723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.490 [2024-07-26 05:20:12.481734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:53.490 [2024-07-26 05:20:12.481744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.490 [2024-07-26 05:20:12.481754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.490 [2024-07-26 05:20:12.481826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.490 [2024-07-26 05:20:12.481839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.490 [2024-07-26 05:20:12.481850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.490 [2024-07-26 05:20:12.481864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.490 [2024-07-26 05:20:12.481882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.490 [2024-07-26 05:20:12.481893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.490 [2024-07-26 05:20:12.481903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.490 [2024-07-26 05:20:12.481917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.783 [2024-07-26 05:20:12.597148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.783 [2024-07-26 05:20:12.597254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:53.783 [2024-07-26 05:20:12.597285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.783 [2024-07-26 05:20:12.597299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.783 [2024-07-26 05:20:12.657918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.783 [2024-07-26 05:20:12.657994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:53.783 [2024-07-26 05:20:12.658017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.783 [2024-07-26 05:20:12.658034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.783 [2024-07-26 05:20:12.658154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.783 [2024-07-26 05:20:12.658174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:53.783 [2024-07-26 05:20:12.658190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.783 [2024-07-26 05:20:12.658236] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.783 [2024-07-26 05:20:12.658315] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.783 [2024-07-26 05:20:12.658333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:53.783 [2024-07-26 05:20:12.658349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.783 [2024-07-26 05:20:12.658364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.783 [2024-07-26 05:20:12.658524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.783 [2024-07-26 05:20:12.658544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:53.783 [2024-07-26 05:20:12.658561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.783 [2024-07-26 05:20:12.658576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.783 [2024-07-26 05:20:12.658633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.783 [2024-07-26 05:20:12.658651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:53.783 [2024-07-26 05:20:12.658667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.783 [2024-07-26 05:20:12.658682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.783 [2024-07-26 05:20:12.658748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.783 [2024-07-26 05:20:12.658765] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:53.783 [2024-07-26 05:20:12.658781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.783 [2024-07-26 05:20:12.658796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.783 [2024-07-26 05:20:12.658862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.783 [2024-07-26 05:20:12.658880] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:53.783 [2024-07-26 05:20:12.658896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.783 [2024-07-26 05:20:12.658911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.783 [2024-07-26 05:20:12.659073] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 600.087 ms, result 0 00:22:55.157 00:22:55.157 00:22:55.157 05:20:14 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:57.059 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:57.059 05:20:15 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:57.059 05:20:15 -- ftl/restore.sh@85 -- # restore_kill 00:22:57.059 05:20:15 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:57.059 05:20:16 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:57.059 05:20:16 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:57.059 Process with pid 74118 is not found 00:22:57.059 Remove shared memory files 00:22:57.059 05:20:16 -- ftl/restore.sh@32 -- # killprocess 74118 00:22:57.059 05:20:16 -- common/autotest_common.sh@926 -- # '[' -z 74118 ']' 00:22:57.059 05:20:16 -- common/autotest_common.sh@930 -- # kill -0 74118 00:22:57.059 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (74118) - No such process 00:22:57.059 05:20:16 -- common/autotest_common.sh@953 -- # echo 'Process with pid 74118 is not found' 
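The "testfile: OK" line above is the actual pass condition of the restore test: a checksum recorded when the test data was generated is re-verified against what comes back from the restored FTL device, and only then is the trap cleared and the target killed. A minimal sketch of that round-trip, with hypothetical paths and the device I/O elided (the real script drives the data through the FTL bdev):

    testfile=/tmp/ftl_testfile          # hypothetical path; the real test uses $testdir/testfile
    dd if=/dev/urandom of="$testfile" bs=4096 count=256 status=none   # generate test data
    md5sum "$testfile" > "$testfile.md5"                              # record the checksum up front
    # ... write the data through the FTL bdev, shut the device down,
    # ... restore it, and read the data back into $testfile ...
    md5sum -c "$testfile.md5"           # prints "<file>: OK" only if the bytes survived
    rm -f "$testfile" "$testfile.md5"   # cleanup, as restore_kill does above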
00:22:57.059 05:20:16 -- ftl/restore.sh@33 -- # remove_shm 00:22:57.059 05:20:16 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:57.059 05:20:16 -- ftl/common.sh@205 -- # rm -f rm -f 00:22:57.059 05:20:16 -- ftl/common.sh@206 -- # rm -f rm -f 00:22:57.059 05:20:16 -- ftl/common.sh@207 -- # rm -f rm -f 00:22:57.059 05:20:16 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:57.059 05:20:16 -- ftl/common.sh@209 -- # rm -f rm -f 00:22:57.059 ************************************ 00:22:57.059 END TEST ftl_restore 00:22:57.059 ************************************ 00:22:57.059 00:22:57.059 real 2m46.118s 00:22:57.059 user 2m33.682s 00:22:57.059 sys 0m14.132s 00:22:57.059 05:20:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:57.059 05:20:16 -- common/autotest_common.sh@10 -- # set +x 00:22:57.059 05:20:16 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:57.059 05:20:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:22:57.059 05:20:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:57.059 05:20:16 -- common/autotest_common.sh@10 -- # set +x 00:22:57.059 ************************************ 00:22:57.059 START TEST ftl_dirty_shutdown 00:22:57.059 ************************************ 00:22:57.059 05:20:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:57.319 * Looking for test storage... 00:22:57.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:57.319 05:20:16 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:57.319 05:20:16 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:57.319 05:20:16 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:57.319 05:20:16 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
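dirty_shutdown.sh is invoked here as "dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0": the -c option names the NV-cache PCI device and the remaining positional argument is the base device. The getopts :u:c: trace in the lines that follow reduces to roughly this shape (a simplified sketch; the -u branch is assumed to take a UUID, and the defaults mirror the values seen in the trace below):

    while getopts ':u:c:' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;   # -c <BDF>: PCI address of the NV-cache device
        u) uuid=$OPTARG ;;       # -u <arg>: assumed to select an existing FTL UUID
      esac
    done
    shift $((OPTIND - 1))        # xtrace shows this expanded as 'shift 2' below
    device=$1                    # first positional argument: base device BDF
    timeout=240                  # defaults also visible in the trace below
    block_size=4096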
00:22:57.319 05:20:16 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:57.319 05:20:16 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:57.319 05:20:16 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:57.319 05:20:16 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:57.319 05:20:16 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:57.319 05:20:16 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:57.319 05:20:16 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:57.319 05:20:16 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:57.319 05:20:16 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:57.319 05:20:16 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:57.319 05:20:16 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:57.319 05:20:16 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:57.319 05:20:16 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:57.319 05:20:16 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:57.319 05:20:16 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:57.319 05:20:16 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:57.319 05:20:16 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:57.319 05:20:16 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:57.319 05:20:16 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:57.319 05:20:16 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:57.319 05:20:16 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:57.319 05:20:16 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:57.319 05:20:16 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:57.319 05:20:16 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@45 -- # svcpid=75918 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 75918 00:22:57.319 05:20:16 -- common/autotest_common.sh@819 -- # '[' -z 75918 ']' 00:22:57.319 05:20:16 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
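waitforlisten gates everything that follows: the target is started in the background and no RPC is issued until its UNIX-domain socket answers on /var/tmp/spdk.sock. Stripped of its retry bookkeeping, the pattern amounts to something like this (a sketch, not the real helper; it assumes the rpc_get_methods RPC as a cheap liveness probe, and $rootdir as resolved above):

    "$rootdir/build/bin/spdk_tgt" -m 0x1 &   # one reactor on core 0, as launched above
    svcpid=$!
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$svcpid" || exit 1            # bail out if the target died during startup
      sleep 0.1                              # poll until the RPC socket accepts requests
    done
    # socket is live: bdev_nvme_attach_controller and the rest can proceed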
00:22:57.319 05:20:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.319 05:20:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:57.319 05:20:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.319 05:20:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:57.319 05:20:16 -- common/autotest_common.sh@10 -- # set +x 00:22:57.319 [2024-07-26 05:20:16.377416] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:57.319 [2024-07-26 05:20:16.377816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75918 ] 00:22:57.578 [2024-07-26 05:20:16.559125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.837 [2024-07-26 05:20:16.794102] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:57.837 [2024-07-26 05:20:16.794525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.823 05:20:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:58.823 05:20:17 -- common/autotest_common.sh@852 -- # return 0 00:22:58.823 05:20:17 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:22:58.823 05:20:17 -- ftl/common.sh@54 -- # local name=nvme0 00:22:58.823 05:20:17 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:22:58.823 05:20:17 -- ftl/common.sh@56 -- # local size=103424 00:22:58.823 05:20:17 -- ftl/common.sh@59 -- # local base_bdev 00:22:58.823 05:20:17 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:22:59.390 05:20:18 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:59.390 05:20:18 -- ftl/common.sh@62 -- # local base_size 00:22:59.390 05:20:18 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:59.390 05:20:18 -- common/autotest_common.sh@1357 -- # local bdev_name=nvme0n1 00:22:59.390 05:20:18 -- common/autotest_common.sh@1358 -- # local bdev_info 00:22:59.390 05:20:18 -- common/autotest_common.sh@1359 -- # local bs 00:22:59.390 05:20:18 -- common/autotest_common.sh@1360 -- # local nb 00:22:59.390 05:20:18 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:59.390 05:20:18 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:22:59.390 { 00:22:59.390 "name": "nvme0n1", 00:22:59.390 "aliases": [ 00:22:59.390 "dc668d42-718b-4ad0-aa04-1972233e4765" 00:22:59.390 ], 00:22:59.390 "product_name": "NVMe disk", 00:22:59.390 "block_size": 4096, 00:22:59.390 "num_blocks": 1310720, 00:22:59.390 "uuid": "dc668d42-718b-4ad0-aa04-1972233e4765", 00:22:59.390 "assigned_rate_limits": { 00:22:59.390 "rw_ios_per_sec": 0, 00:22:59.390 "rw_mbytes_per_sec": 0, 00:22:59.390 "r_mbytes_per_sec": 0, 00:22:59.390 "w_mbytes_per_sec": 0 00:22:59.390 }, 00:22:59.390 "claimed": true, 00:22:59.390 "claim_type": "read_many_write_one", 00:22:59.390 "zoned": false, 00:22:59.390 "supported_io_types": { 00:22:59.390 "read": true, 00:22:59.390 "write": true, 00:22:59.390 "unmap": true, 00:22:59.390 "write_zeroes": true, 00:22:59.390 "flush": true, 00:22:59.390 "reset": true, 00:22:59.390 "compare": true, 
00:22:59.390 "compare_and_write": false, 00:22:59.390 "abort": true, 00:22:59.390 "nvme_admin": true, 00:22:59.390 "nvme_io": true 00:22:59.390 }, 00:22:59.390 "driver_specific": { 00:22:59.390 "nvme": [ 00:22:59.390 { 00:22:59.390 "pci_address": "0000:00:07.0", 00:22:59.390 "trid": { 00:22:59.390 "trtype": "PCIe", 00:22:59.390 "traddr": "0000:00:07.0" 00:22:59.390 }, 00:22:59.390 "ctrlr_data": { 00:22:59.390 "cntlid": 0, 00:22:59.390 "vendor_id": "0x1b36", 00:22:59.390 "model_number": "QEMU NVMe Ctrl", 00:22:59.390 "serial_number": "12341", 00:22:59.390 "firmware_revision": "8.0.0", 00:22:59.390 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:59.390 "oacs": { 00:22:59.390 "security": 0, 00:22:59.390 "format": 1, 00:22:59.390 "firmware": 0, 00:22:59.390 "ns_manage": 1 00:22:59.390 }, 00:22:59.390 "multi_ctrlr": false, 00:22:59.390 "ana_reporting": false 00:22:59.390 }, 00:22:59.390 "vs": { 00:22:59.390 "nvme_version": "1.4" 00:22:59.390 }, 00:22:59.390 "ns_data": { 00:22:59.390 "id": 1, 00:22:59.390 "can_share": false 00:22:59.390 } 00:22:59.390 } 00:22:59.390 ], 00:22:59.390 "mp_policy": "active_passive" 00:22:59.390 } 00:22:59.390 } 00:22:59.390 ]' 00:22:59.390 05:20:18 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:22:59.649 05:20:18 -- common/autotest_common.sh@1362 -- # bs=4096 00:22:59.649 05:20:18 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:22:59.649 05:20:18 -- common/autotest_common.sh@1363 -- # nb=1310720 00:22:59.649 05:20:18 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:22:59.649 05:20:18 -- common/autotest_common.sh@1367 -- # echo 5120 00:22:59.649 05:20:18 -- ftl/common.sh@63 -- # base_size=5120 00:22:59.649 05:20:18 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:59.649 05:20:18 -- ftl/common.sh@67 -- # clear_lvols 00:22:59.649 05:20:18 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:59.649 05:20:18 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:59.908 05:20:18 -- ftl/common.sh@28 -- # stores=ddf58bfc-49c4-4f62-9511-08d0873d51e1 00:22:59.908 05:20:18 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:59.908 05:20:18 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ddf58bfc-49c4-4f62-9511-08d0873d51e1 00:22:59.908 05:20:18 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:00.167 05:20:19 -- ftl/common.sh@68 -- # lvs=a34098b8-1337-4be7-844a-dc2f2b7e212b 00:23:00.167 05:20:19 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a34098b8-1337-4be7-844a-dc2f2b7e212b 00:23:00.426 05:20:19 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=417644a3-1e2c-4676-86cf-27c6fdb4e20c 00:23:00.426 05:20:19 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:23:00.426 05:20:19 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 417644a3-1e2c-4676-86cf-27c6fdb4e20c 00:23:00.426 05:20:19 -- ftl/common.sh@35 -- # local name=nvc0 00:23:00.426 05:20:19 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:23:00.426 05:20:19 -- ftl/common.sh@37 -- # local base_bdev=417644a3-1e2c-4676-86cf-27c6fdb4e20c 00:23:00.426 05:20:19 -- ftl/common.sh@38 -- # local cache_size= 00:23:00.426 05:20:19 -- ftl/common.sh@41 -- # get_bdev_size 417644a3-1e2c-4676-86cf-27c6fdb4e20c 00:23:00.426 05:20:19 -- common/autotest_common.sh@1357 -- # local bdev_name=417644a3-1e2c-4676-86cf-27c6fdb4e20c 00:23:00.426 05:20:19 -- 
common/autotest_common.sh@1358 -- # local bdev_info 00:23:00.426 05:20:19 -- common/autotest_common.sh@1359 -- # local bs 00:23:00.426 05:20:19 -- common/autotest_common.sh@1360 -- # local nb 00:23:00.426 05:20:19 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 417644a3-1e2c-4676-86cf-27c6fdb4e20c 00:23:00.685 05:20:19 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:00.685 { 00:23:00.685 "name": "417644a3-1e2c-4676-86cf-27c6fdb4e20c", 00:23:00.685 "aliases": [ 00:23:00.685 "lvs/nvme0n1p0" 00:23:00.685 ], 00:23:00.685 "product_name": "Logical Volume", 00:23:00.685 "block_size": 4096, 00:23:00.685 "num_blocks": 26476544, 00:23:00.685 "uuid": "417644a3-1e2c-4676-86cf-27c6fdb4e20c", 00:23:00.685 "assigned_rate_limits": { 00:23:00.685 "rw_ios_per_sec": 0, 00:23:00.685 "rw_mbytes_per_sec": 0, 00:23:00.685 "r_mbytes_per_sec": 0, 00:23:00.685 "w_mbytes_per_sec": 0 00:23:00.685 }, 00:23:00.685 "claimed": false, 00:23:00.685 "zoned": false, 00:23:00.685 "supported_io_types": { 00:23:00.685 "read": true, 00:23:00.685 "write": true, 00:23:00.685 "unmap": true, 00:23:00.685 "write_zeroes": true, 00:23:00.685 "flush": false, 00:23:00.685 "reset": true, 00:23:00.685 "compare": false, 00:23:00.685 "compare_and_write": false, 00:23:00.685 "abort": false, 00:23:00.685 "nvme_admin": false, 00:23:00.685 "nvme_io": false 00:23:00.685 }, 00:23:00.685 "driver_specific": { 00:23:00.685 "lvol": { 00:23:00.686 "lvol_store_uuid": "a34098b8-1337-4be7-844a-dc2f2b7e212b", 00:23:00.686 "base_bdev": "nvme0n1", 00:23:00.686 "thin_provision": true, 00:23:00.686 "snapshot": false, 00:23:00.686 "clone": false, 00:23:00.686 "esnap_clone": false 00:23:00.686 } 00:23:00.686 } 00:23:00.686 } 00:23:00.686 ]' 00:23:00.686 05:20:19 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:00.686 05:20:19 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:00.686 05:20:19 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:23:00.686 05:20:19 -- common/autotest_common.sh@1363 -- # nb=26476544 00:23:00.686 05:20:19 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:23:00.686 05:20:19 -- common/autotest_common.sh@1367 -- # echo 103424 00:23:00.686 05:20:19 -- ftl/common.sh@41 -- # local base_size=5171 00:23:00.686 05:20:19 -- ftl/common.sh@44 -- # local nvc_bdev 00:23:00.686 05:20:19 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:23:00.945 05:20:19 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:00.945 05:20:19 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:00.945 05:20:19 -- ftl/common.sh@48 -- # get_bdev_size 417644a3-1e2c-4676-86cf-27c6fdb4e20c 00:23:00.945 05:20:19 -- common/autotest_common.sh@1357 -- # local bdev_name=417644a3-1e2c-4676-86cf-27c6fdb4e20c 00:23:00.945 05:20:19 -- common/autotest_common.sh@1358 -- # local bdev_info 00:23:00.945 05:20:19 -- common/autotest_common.sh@1359 -- # local bs 00:23:00.945 05:20:19 -- common/autotest_common.sh@1360 -- # local nb 00:23:00.945 05:20:19 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 417644a3-1e2c-4676-86cf-27c6fdb4e20c 00:23:01.204 05:20:20 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:01.204 { 00:23:01.204 "name": "417644a3-1e2c-4676-86cf-27c6fdb4e20c", 00:23:01.204 "aliases": [ 00:23:01.204 "lvs/nvme0n1p0" 00:23:01.204 ], 00:23:01.204 "product_name": "Logical Volume", 00:23:01.204 "block_size": 4096, 00:23:01.204 "num_blocks": 26476544, 
00:23:01.204 "uuid": "417644a3-1e2c-4676-86cf-27c6fdb4e20c", 00:23:01.204 "assigned_rate_limits": { 00:23:01.204 "rw_ios_per_sec": 0, 00:23:01.204 "rw_mbytes_per_sec": 0, 00:23:01.204 "r_mbytes_per_sec": 0, 00:23:01.204 "w_mbytes_per_sec": 0 00:23:01.204 }, 00:23:01.204 "claimed": false, 00:23:01.204 "zoned": false, 00:23:01.204 "supported_io_types": { 00:23:01.204 "read": true, 00:23:01.204 "write": true, 00:23:01.204 "unmap": true, 00:23:01.204 "write_zeroes": true, 00:23:01.204 "flush": false, 00:23:01.204 "reset": true, 00:23:01.204 "compare": false, 00:23:01.204 "compare_and_write": false, 00:23:01.204 "abort": false, 00:23:01.204 "nvme_admin": false, 00:23:01.204 "nvme_io": false 00:23:01.204 }, 00:23:01.204 "driver_specific": { 00:23:01.204 "lvol": { 00:23:01.204 "lvol_store_uuid": "a34098b8-1337-4be7-844a-dc2f2b7e212b", 00:23:01.204 "base_bdev": "nvme0n1", 00:23:01.204 "thin_provision": true, 00:23:01.204 "snapshot": false, 00:23:01.204 "clone": false, 00:23:01.204 "esnap_clone": false 00:23:01.204 } 00:23:01.204 } 00:23:01.204 } 00:23:01.204 ]' 00:23:01.204 05:20:20 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:01.204 05:20:20 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:01.204 05:20:20 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:23:01.204 05:20:20 -- common/autotest_common.sh@1363 -- # nb=26476544 00:23:01.204 05:20:20 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:23:01.204 05:20:20 -- common/autotest_common.sh@1367 -- # echo 103424 00:23:01.204 05:20:20 -- ftl/common.sh@48 -- # cache_size=5171 00:23:01.204 05:20:20 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:01.463 05:20:20 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:01.463 05:20:20 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 417644a3-1e2c-4676-86cf-27c6fdb4e20c 00:23:01.463 05:20:20 -- common/autotest_common.sh@1357 -- # local bdev_name=417644a3-1e2c-4676-86cf-27c6fdb4e20c 00:23:01.463 05:20:20 -- common/autotest_common.sh@1358 -- # local bdev_info 00:23:01.463 05:20:20 -- common/autotest_common.sh@1359 -- # local bs 00:23:01.463 05:20:20 -- common/autotest_common.sh@1360 -- # local nb 00:23:01.463 05:20:20 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 417644a3-1e2c-4676-86cf-27c6fdb4e20c 00:23:01.463 05:20:20 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:23:01.463 { 00:23:01.463 "name": "417644a3-1e2c-4676-86cf-27c6fdb4e20c", 00:23:01.463 "aliases": [ 00:23:01.463 "lvs/nvme0n1p0" 00:23:01.463 ], 00:23:01.463 "product_name": "Logical Volume", 00:23:01.463 "block_size": 4096, 00:23:01.463 "num_blocks": 26476544, 00:23:01.463 "uuid": "417644a3-1e2c-4676-86cf-27c6fdb4e20c", 00:23:01.463 "assigned_rate_limits": { 00:23:01.463 "rw_ios_per_sec": 0, 00:23:01.463 "rw_mbytes_per_sec": 0, 00:23:01.463 "r_mbytes_per_sec": 0, 00:23:01.463 "w_mbytes_per_sec": 0 00:23:01.463 }, 00:23:01.463 "claimed": false, 00:23:01.463 "zoned": false, 00:23:01.463 "supported_io_types": { 00:23:01.463 "read": true, 00:23:01.463 "write": true, 00:23:01.463 "unmap": true, 00:23:01.463 "write_zeroes": true, 00:23:01.463 "flush": false, 00:23:01.463 "reset": true, 00:23:01.463 "compare": false, 00:23:01.463 "compare_and_write": false, 00:23:01.463 "abort": false, 00:23:01.463 "nvme_admin": false, 00:23:01.463 "nvme_io": false 00:23:01.463 }, 00:23:01.463 "driver_specific": { 00:23:01.463 "lvol": { 00:23:01.463 "lvol_store_uuid": 
"a34098b8-1337-4be7-844a-dc2f2b7e212b", 00:23:01.463 "base_bdev": "nvme0n1", 00:23:01.463 "thin_provision": true, 00:23:01.463 "snapshot": false, 00:23:01.463 "clone": false, 00:23:01.463 "esnap_clone": false 00:23:01.463 } 00:23:01.463 } 00:23:01.463 } 00:23:01.463 ]' 00:23:01.463 05:20:20 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:23:01.722 05:20:20 -- common/autotest_common.sh@1362 -- # bs=4096 00:23:01.722 05:20:20 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:23:01.722 05:20:20 -- common/autotest_common.sh@1363 -- # nb=26476544 00:23:01.722 05:20:20 -- common/autotest_common.sh@1366 -- # bdev_size=103424 00:23:01.722 05:20:20 -- common/autotest_common.sh@1367 -- # echo 103424 00:23:01.722 05:20:20 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:01.722 05:20:20 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 417644a3-1e2c-4676-86cf-27c6fdb4e20c --l2p_dram_limit 10' 00:23:01.722 05:20:20 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:01.722 05:20:20 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:23:01.722 05:20:20 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:01.722 05:20:20 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 417644a3-1e2c-4676-86cf-27c6fdb4e20c --l2p_dram_limit 10 -c nvc0n1p0 00:23:01.983 [2024-07-26 05:20:20.840808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.983 [2024-07-26 05:20:20.840858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:01.983 [2024-07-26 05:20:20.840877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:01.983 [2024-07-26 05:20:20.840887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.983 [2024-07-26 05:20:20.840953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.983 [2024-07-26 05:20:20.840965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:01.983 [2024-07-26 05:20:20.840978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:23:01.983 [2024-07-26 05:20:20.840988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.983 [2024-07-26 05:20:20.841028] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:01.983 [2024-07-26 05:20:20.842239] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:01.983 [2024-07-26 05:20:20.842275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.983 [2024-07-26 05:20:20.842286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:01.983 [2024-07-26 05:20:20.842300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:23:01.983 [2024-07-26 05:20:20.842310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.983 [2024-07-26 05:20:20.842389] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cec5690a-de13-4374-abd2-4c80df841bd5 00:23:01.983 [2024-07-26 05:20:20.843791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.983 [2024-07-26 05:20:20.843829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:01.983 [2024-07-26 05:20:20.843842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.019 ms 00:23:01.983 [2024-07-26 05:20:20.843855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.983 [2024-07-26 05:20:20.851364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.983 [2024-07-26 05:20:20.851400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:01.983 [2024-07-26 05:20:20.851412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.466 ms 00:23:01.983 [2024-07-26 05:20:20.851426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.983 [2024-07-26 05:20:20.851523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.983 [2024-07-26 05:20:20.851540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:01.983 [2024-07-26 05:20:20.851559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:23:01.983 [2024-07-26 05:20:20.851576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.983 [2024-07-26 05:20:20.851645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.983 [2024-07-26 05:20:20.851661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:01.983 [2024-07-26 05:20:20.851671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:01.983 [2024-07-26 05:20:20.851687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.983 [2024-07-26 05:20:20.851715] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:01.983 [2024-07-26 05:20:20.857575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.983 [2024-07-26 05:20:20.857610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:01.983 [2024-07-26 05:20:20.857625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.866 ms 00:23:01.983 [2024-07-26 05:20:20.857636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.983 [2024-07-26 05:20:20.857676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.983 [2024-07-26 05:20:20.857686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:01.983 [2024-07-26 05:20:20.857700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:01.983 [2024-07-26 05:20:20.857710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.983 [2024-07-26 05:20:20.857748] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:01.983 [2024-07-26 05:20:20.857856] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:01.983 [2024-07-26 05:20:20.857876] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:01.983 [2024-07-26 05:20:20.857889] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:01.983 [2024-07-26 05:20:20.857904] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:01.983 [2024-07-26 05:20:20.857916] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:01.983 [2024-07-26 05:20:20.857929] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:01.983 [2024-07-26 
05:20:20.857939] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:01.983 [2024-07-26 05:20:20.857952] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:01.983 [2024-07-26 05:20:20.857965] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:01.983 [2024-07-26 05:20:20.857988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.983 [2024-07-26 05:20:20.857998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:01.983 [2024-07-26 05:20:20.858022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:23:01.983 [2024-07-26 05:20:20.858048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.983 [2024-07-26 05:20:20.858108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.983 [2024-07-26 05:20:20.858119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:01.983 [2024-07-26 05:20:20.858131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:01.983 [2024-07-26 05:20:20.858141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.983 [2024-07-26 05:20:20.858216] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:01.983 [2024-07-26 05:20:20.858227] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:01.983 [2024-07-26 05:20:20.858256] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:01.983 [2024-07-26 05:20:20.858267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.983 [2024-07-26 05:20:20.858280] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:01.983 [2024-07-26 05:20:20.858290] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:01.983 [2024-07-26 05:20:20.858301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:01.983 [2024-07-26 05:20:20.858311] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:01.983 [2024-07-26 05:20:20.858323] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:01.983 [2024-07-26 05:20:20.858332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:01.983 [2024-07-26 05:20:20.858343] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:01.983 [2024-07-26 05:20:20.858353] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:01.983 [2024-07-26 05:20:20.858366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:01.983 [2024-07-26 05:20:20.858375] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:01.983 [2024-07-26 05:20:20.858388] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:01.983 [2024-07-26 05:20:20.858396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.984 [2024-07-26 05:20:20.858410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:01.984 [2024-07-26 05:20:20.858419] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:01.984 [2024-07-26 05:20:20.858553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.984 [2024-07-26 05:20:20.858562] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:01.984 [2024-07-26 05:20:20.858574] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:01.984 [2024-07-26 05:20:20.858583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:01.984 [2024-07-26 05:20:20.858595] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:01.984 [2024-07-26 05:20:20.858605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:01.984 [2024-07-26 05:20:20.858616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:01.984 [2024-07-26 05:20:20.858628] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:01.984 [2024-07-26 05:20:20.858639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:01.984 [2024-07-26 05:20:20.858648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:01.984 [2024-07-26 05:20:20.858660] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:01.984 [2024-07-26 05:20:20.858669] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:01.984 [2024-07-26 05:20:20.858681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:01.984 [2024-07-26 05:20:20.858689] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:01.984 [2024-07-26 05:20:20.858704] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:01.984 [2024-07-26 05:20:20.858713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:01.984 [2024-07-26 05:20:20.858724] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:01.984 [2024-07-26 05:20:20.858733] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:01.984 [2024-07-26 05:20:20.858745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:01.984 [2024-07-26 05:20:20.858754] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:01.984 [2024-07-26 05:20:20.858773] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:01.984 [2024-07-26 05:20:20.858782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:01.984 [2024-07-26 05:20:20.858793] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:01.984 [2024-07-26 05:20:20.858804] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:01.984 [2024-07-26 05:20:20.858816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:01.984 [2024-07-26 05:20:20.858826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.984 [2024-07-26 05:20:20.858838] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:01.984 [2024-07-26 05:20:20.858848] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:01.984 [2024-07-26 05:20:20.858860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:01.984 [2024-07-26 05:20:20.858869] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:01.984 [2024-07-26 05:20:20.858883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:01.984 [2024-07-26 05:20:20.858893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:01.984 [2024-07-26 05:20:20.858906] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:01.984 [2024-07-26 05:20:20.858918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.984 [2024-07-26 05:20:20.858935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:01.984 [2024-07-26 05:20:20.858946] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:01.984 [2024-07-26 05:20:20.858959] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:01.984 [2024-07-26 05:20:20.858969] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:01.984 [2024-07-26 05:20:20.858982] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:01.984 [2024-07-26 05:20:20.858992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:01.984 [2024-07-26 05:20:20.859005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:01.984 [2024-07-26 05:20:20.859016] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:01.984 [2024-07-26 05:20:20.859028] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:01.984 [2024-07-26 05:20:20.859039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:01.984 [2024-07-26 05:20:20.859051] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:01.984 [2024-07-26 05:20:20.859062] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:01.984 [2024-07-26 05:20:20.859078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:01.984 [2024-07-26 05:20:20.859089] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:01.984 [2024-07-26 05:20:20.859103] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.984 [2024-07-26 05:20:20.859114] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:01.984 [2024-07-26 05:20:20.859127] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:01.984 [2024-07-26 05:20:20.859138] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:01.984 [2024-07-26 05:20:20.859151] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:01.984 [2024-07-26 05:20:20.859162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.984 [2024-07-26 05:20:20.859175] mngt/ftl_mngt.c: 407:trace_step: 
00:23:01.984 [2024-07-26 05:20:20.859162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.984 [2024-07-26 05:20:20.859175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:23:01.984 [2024-07-26 05:20:20.859185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms
00:23:01.984 [2024-07-26 05:20:20.859198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:01.984 [2024-07-26 05:20:20.884484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.984 [2024-07-26 05:20:20.884520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:23:01.984 [2024-07-26 05:20:20.884534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.231 ms
00:23:01.984 [2024-07-26 05:20:20.884562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:01.984 [2024-07-26 05:20:20.884646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.984 [2024-07-26 05:20:20.884662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:23:01.984 [2024-07-26 05:20:20.884673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:23:01.984 [2024-07-26 05:20:20.884685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:01.984 [2024-07-26 05:20:20.940264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.984 [2024-07-26 05:20:20.940303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:23:01.984 [2024-07-26 05:20:20.940316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.522 ms
00:23:01.984 [2024-07-26 05:20:20.940329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:01.984 [2024-07-26 05:20:20.940365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.984 [2024-07-26 05:20:20.940381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:23:01.984 [2024-07-26 05:20:20.940392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:23:01.984 [2024-07-26 05:20:20.940404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:01.984 [2024-07-26 05:20:20.940873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.984 [2024-07-26 05:20:20.940897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:23:01.984 [2024-07-26 05:20:20.940908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms
00:23:01.984 [2024-07-26 05:20:20.940920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:01.984 [2024-07-26 05:20:20.941023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.984 [2024-07-26 05:20:20.941042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:23:01.984 [2024-07-26 05:20:20.941053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms
00:23:01.984 [2024-07-26 05:20:20.941065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:01.984 [2024-07-26 05:20:20.966671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.984 [2024-07-26 05:20:20.966710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:23:01.984 [2024-07-26 05:20:20.966723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.585 ms
00:23:01.984 [2024-07-26 05:20:20.966752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:01.984 [2024-07-26 05:20:20.980517] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:23:01.984 [2024-07-26 05:20:20.983770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.984 [2024-07-26 05:20:20.983802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:23:01.984 [2024-07-26 05:20:20.983817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.917 ms
00:23:01.984 [2024-07-26 05:20:20.983843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:01.985 [2024-07-26 05:20:21.070484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:01.985 [2024-07-26 05:20:21.070546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P
00:23:01.985 [2024-07-26 05:20:21.070565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.600 ms
00:23:01.985 [2024-07-26 05:20:21.070576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:01.985 [2024-07-26 05:20:21.070630] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time.
00:23:01.985 [2024-07-26 05:20:21.070645] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB
00:23:05.270 [2024-07-26 05:20:23.880860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.270 [2024-07-26 05:20:23.880927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:23:05.270 [2024-07-26 05:20:23.880963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2810.206 ms
00:23:05.270 [2024-07-26 05:20:23.880975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:05.270 [2024-07-26 05:20:23.881174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.270 [2024-07-26 05:20:23.881188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:23:05.270 [2024-07-26 05:20:23.881201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms
00:23:05.270 [2024-07-26 05:20:23.881227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:05.270 [2024-07-26 05:20:23.919620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.270 [2024-07-26 05:20:23.919663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:23:05.270 [2024-07-26 05:20:23.919698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.312 ms
00:23:05.270 [2024-07-26 05:20:23.919708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:05.270 [2024-07-26 05:20:23.957724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.270 [2024-07-26 05:20:23.957762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:23:05.270 [2024-07-26 05:20:23.957783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.965 ms
00:23:05.270 [2024-07-26 05:20:23.957793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:05.270 [2024-07-26 05:20:23.958410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.270 [2024-07-26 05:20:23.958466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:23:05.270 [2024-07-26 05:20:23.958504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms
00:23:05.270 [2024-07-26 05:20:23.958534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
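Editor's note: the "Scrub NV cache" step above wipes the 4 GiB NV cache data region announced by ftl_mngt_scrub_nv_cache, and at 2810.206 ms it accounts for most of the 'FTL startup' total reported a few lines below. A quick back-of-the-envelope rate check, with both numbers copied from the log:

    # 4 GiB (4096 MiB) scrubbed in 2810.206 ms:
    echo "scale=2; 4096 / 2.810206" | bc   # -> 1457.54 MiB/s, roughly 1.4 GiB/s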
00:23:05.270 [2024-07-26 05:20:24.058648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.270 [2024-07-26 05:20:24.058912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:23:05.270 [2024-07-26 05:20:24.059046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.977 ms
00:23:05.270 [2024-07-26 05:20:24.059065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:05.271 [2024-07-26 05:20:24.098679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.271 [2024-07-26 05:20:24.098726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:23:05.271 [2024-07-26 05:20:24.098760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.559 ms
00:23:05.271 [2024-07-26 05:20:24.098774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:05.271 [2024-07-26 05:20:24.101133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.271 [2024-07-26 05:20:24.101164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
00:23:05.271 [2024-07-26 05:20:24.101181] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.312 ms
00:23:05.271 [2024-07-26 05:20:24.101192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:05.271 [2024-07-26 05:20:24.139130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.271 [2024-07-26 05:20:24.139168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:23:05.271 [2024-07-26 05:20:24.139184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.868 ms
00:23:05.271 [2024-07-26 05:20:24.139209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:05.271 [2024-07-26 05:20:24.139292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.271 [2024-07-26 05:20:24.139307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:23:05.271 [2024-07-26 05:20:24.139338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:23:05.271 [2024-07-26 05:20:24.139348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:05.271 [2024-07-26 05:20:24.139455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.271 [2024-07-26 05:20:24.139467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:23:05.271 [2024-07-26 05:20:24.139483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:23:05.271 [2024-07-26 05:20:24.139493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:05.271 [2024-07-26 05:20:24.140579] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3299.271 ms, result 0
00:23:05.271 {
00:23:05.271 "name": "ftl0",
00:23:05.271 "uuid": "cec5690a-de13-4374-abd2-4c80df841bd5"
00:23:05.271 }
00:23:05.271 05:20:24 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:23:05.271 05:20:24 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:23:05.529 05:20:24 -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:23:05.529 05:20:24 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:23:05.529 05:20:24 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:23:05.787 /dev/nbd0
00:23:05.788 05:20:24 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:23:05.788 05:20:24 --
common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:05.788 05:20:24 -- common/autotest_common.sh@857 -- # local i 00:23:05.788 05:20:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:05.788 05:20:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:05.788 05:20:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:05.788 05:20:24 -- common/autotest_common.sh@861 -- # break 00:23:05.788 05:20:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:05.788 05:20:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:05.788 05:20:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:05.788 1+0 records in 00:23:05.788 1+0 records out 00:23:05.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459574 s, 8.9 MB/s 00:23:05.788 05:20:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:05.788 05:20:24 -- common/autotest_common.sh@874 -- # size=4096 00:23:05.788 05:20:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:05.788 05:20:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:05.788 05:20:24 -- common/autotest_common.sh@877 -- # return 0 00:23:05.788 05:20:24 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:05.788 [2024-07-26 05:20:24.761530] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:05.788 [2024-07-26 05:20:24.761636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76061 ] 00:23:06.046 [2024-07-26 05:20:24.926995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.305 [2024-07-26 05:20:25.223507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.791  Copying: 205/1024 [MB] (205 MBps) Copying: 412/1024 [MB] (207 MBps) Copying: 619/1024 [MB] (207 MBps) Copying: 817/1024 [MB] (197 MBps) Copying: 1021/1024 [MB] (204 MBps) Copying: 1024/1024 [MB] (average 204 MBps) 00:23:12.791 00:23:13.049 05:20:31 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:14.948 05:20:33 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:14.948 [2024-07-26 05:20:33.795987] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
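Editor's note: the xtrace above (common/autotest_common.sh@856-877) shows what waitfornbd does: poll /proc/partitions until the kernel exposes the nbd device, then prove it can actually service a 4 KiB O_DIRECT read. A condensed sketch reconstructed from that trace; the retry sleeps and the /tmp scratch path are assumptions, since the traced run succeeded on its first pass:

    waitfornbd() {
        local nbd_name=$1 i
        # wait until the device shows up in the partition table
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # confirm a direct-I/O read returns real data, as the trace's dd/stat pair does
        for (( i = 1; i <= 20; i++ )); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            local size
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }
    waitfornbd nbd0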
00:23:14.948 [2024-07-26 05:20:33.796143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76155 ]
00:23:14.948 [2024-07-26 05:20:33.982777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:15.206 [2024-07-26 05:20:34.258688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:13.101  Copying: 1024/1024 [MB] (average 18 MBps)
00:24:13.101 05:21:32 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0
00:24:13.360 05:21:32 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
00:24:13.618 05:21:32 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:24:13.619 [2024-07-26 05:21:32.535290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.619 [2024-07-26 05:21:32.535351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:13.619 [2024-07-26 05:21:32.535375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:24:13.619 [2024-07-26 05:21:32.535393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.619 [2024-07-26 05:21:32.535440] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:13.619 [2024-07-26 05:21:32.539202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.619 [2024-07-26 05:21:32.539244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:24:13.619 [2024-07-26 05:21:32.539261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.734 ms
00:24:13.619 [2024-07-26 05:21:32.539271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.619 [2024-07-26 05:21:32.541503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.619 [2024-07-26 05:21:32.541545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:24:13.619 [2024-07-26 05:21:32.541564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.198 ms
00:24:13.619 [2024-07-26 05:21:32.541575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.619 [2024-07-26 05:21:32.557956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.619 [2024-07-26 05:21:32.557994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:24:13.619 [2024-07-26 05:21:32.558011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.351 ms
00:24:13.619 [2024-07-26 05:21:32.558021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.619 [2024-07-26 05:21:32.563049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.619 [2024-07-26 05:21:32.563079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:24:13.619 [2024-07-26 05:21:32.563093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.984 ms
00:24:13.619 [2024-07-26 05:21:32.563118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.619 [2024-07-26 05:21:32.599699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.619 [2024-07-26 05:21:32.599734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:24:13.619 [2024-07-26 05:21:32.599751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.487 ms
00:24:13.619 [2024-07-26 05:21:32.599776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.619 [2024-07-26 05:21:32.621897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.619 [2024-07-26 05:21:32.621935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:24:13.619 [2024-07-26 05:21:32.621955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.075 ms
00:24:13.619 [2024-07-26 05:21:32.621965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.619 [2024-07-26 05:21:32.622113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.619 [2024-07-26 05:21:32.622126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:24:13.619 [2024-07-26 05:21:32.622140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms
00:24:13.619 [2024-07-26 05:21:32.622149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.619 [2024-07-26 05:21:32.659345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.619 [2024-07-26 05:21:32.659380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:24:13.619 [2024-07-26 05:21:32.659397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.169 ms
00:24:13.619 [2024-07-26 05:21:32.659407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
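Editor's note: every FTL management step is logged as a fixed quadruplet of trace_step lines (406 Action/Rollback, 407 name, 409 duration, 410 status), which makes the log easy to post-process. A hypothetical helper, assuming the console output is saved with one entry per line (the file name is a placeholder, not something the test produces):

    # Rank FTL management steps by duration, pairing each "name:" line
    # with the "duration:" line that follows it.
    grep -oE '(name|duration): .*' console.log \
      | awk -F': ' '$1 == "name"     { step = $2 }
                    $1 == "duration" { print $2 "\t" step }' \
      | sort -rn | head

For the shutdown above, the persist steps dominate: persist trim metadata (37.400 ms), persist band info metadata (37.169 ms), Persist superblock (36.708 ms) and Persist NV cache metadata (36.487 ms).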
00:24:13.619 [2024-07-26 05:21:32.696852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.619 [2024-07-26 05:21:32.696898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:24:13.619 [2024-07-26 05:21:32.696914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.400 ms
00:24:13.619 [2024-07-26 05:21:32.696940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.879 [2024-07-26 05:21:32.733692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.879 [2024-07-26 05:21:32.733728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:24:13.879 [2024-07-26 05:21:32.733744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.708 ms
00:24:13.879 [2024-07-26 05:21:32.733754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.879 [2024-07-26 05:21:32.769330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.879 [2024-07-26 05:21:32.769487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:24:13.879 [2024-07-26 05:21:32.769680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.477 ms
00:24:13.879 [2024-07-26 05:21:32.769723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.879 [2024-07-26 05:21:32.769798] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:13.879 [2024-07-26 05:21:32.769847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free
00:24:13.880 [2024-07-26 05:21:32.771374] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:13.880 [2024-07-26 05:21:32.771387] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cec5690a-de13-4374-abd2-4c80df841bd5
00:24:13.880 [2024-07-26 05:21:32.771398] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:24:13.880 [2024-07-26 05:21:32.771410] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:24:13.880 [2024-07-26 05:21:32.771433] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:24:13.880 [2024-07-26 05:21:32.771446] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:24:13.880 [2024-07-26 05:21:32.771455] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:13.880 [2024-07-26 05:21:32.771472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:24:13.880 [2024-07-26 05:21:32.771482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:24:13.880 [2024-07-26 05:21:32.771494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:24:13.880 [2024-07-26 05:21:32.771503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:24:13.880 [2024-07-26 05:21:32.771518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.880 [2024-07-26 05:21:32.771528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:24:13.880 [2024-07-26 05:21:32.771541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.727 ms
00:24:13.880 [2024-07-26 05:21:32.771550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.881 [2024-07-26 05:21:32.791148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.881 [2024-07-26 05:21:32.791183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:24:13.881 [2024-07-26 05:21:32.791236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.539 ms
00:24:13.881 [2024-07-26 05:21:32.791246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
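Editor's note: the statistics block above is internally consistent: with total writes at 960 (all of them metadata from the persist steps) and user writes at 0, a write amplification factor of total over user writes degenerates, which is why the dump prints "WAF: inf". The same guard in miniature:

    # WAF from the two counters above; a zero user-write count yields "inf".
    awk 'BEGIN { total = 960; user = 0; if (user > 0) print total / user; else print "inf" }'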
00:24:13.881 [2024-07-26 05:21:32.791494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.881 [2024-07-26 05:21:32.791506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:24:13.881 [2024-07-26 05:21:32.791519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms
00:24:13.881 [2024-07-26 05:21:32.791529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.881 [2024-07-26 05:21:32.857604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.881 [2024-07-26 05:21:32.857639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:13.881 [2024-07-26 05:21:32.857655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.881 [2024-07-26 05:21:32.857665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.881 [2024-07-26 05:21:32.857732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.881 [2024-07-26 05:21:32.857743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:13.881 [2024-07-26 05:21:32.857756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.881 [2024-07-26 05:21:32.857766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.881 [2024-07-26 05:21:32.857841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.881 [2024-07-26 05:21:32.857857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:13.881 [2024-07-26 05:21:32.857870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.881 [2024-07-26 05:21:32.857880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.881 [2024-07-26 05:21:32.857901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.881 [2024-07-26 05:21:32.857912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:13.881 [2024-07-26 05:21:32.857924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.881 [2024-07-26 05:21:32.857934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:13.881 [2024-07-26 05:21:32.973603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:13.881 [2024-07-26 05:21:32.973660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:13.881 [2024-07-26 05:21:32.973677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:13.881 [2024-07-26 05:21:32.973703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:14.140 [2024-07-26 05:21:33.019028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:14.140 [2024-07-26 05:21:33.019073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:14.140 [2024-07-26 05:21:33.019106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:14.140 [2024-07-26 05:21:33.019116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:14.140 [2024-07-26 05:21:33.019203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:14.140 [2024-07-26 05:21:33.019215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:14.140 [2024-07-26 05:21:33.019250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
[2024-07-26 05:21:33.019260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.140 [2024-07-26 05:21:33.019313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.140 [2024-07-26 05:21:33.019324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:14.140 [2024-07-26 05:21:33.019337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.140 [2024-07-26 05:21:33.019347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.140 [2024-07-26 05:21:33.019480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.140 [2024-07-26 05:21:33.019494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:14.140 [2024-07-26 05:21:33.019506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.140 [2024-07-26 05:21:33.019519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.140 [2024-07-26 05:21:33.019563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.140 [2024-07-26 05:21:33.019575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:14.140 [2024-07-26 05:21:33.019587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.140 [2024-07-26 05:21:33.019597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.140 [2024-07-26 05:21:33.019645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.140 [2024-07-26 05:21:33.019657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:14.140 [2024-07-26 05:21:33.019670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.140 [2024-07-26 05:21:33.019682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.140 [2024-07-26 05:21:33.019733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.140 [2024-07-26 05:21:33.019745] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:14.140 [2024-07-26 05:21:33.019758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.140 [2024-07-26 05:21:33.019767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.140 [2024-07-26 05:21:33.019932] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 484.605 ms, result 0 00:24:14.140 true 00:24:14.140 05:21:33 -- ftl/dirty_shutdown.sh@83 -- # kill -9 75918 00:24:14.140 05:21:33 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid75918 00:24:14.140 05:21:33 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:14.140 [2024-07-26 05:21:33.155617] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
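Editor's note: the lines above are the heart of the dirty-shutdown case. The spdk_tgt that owned ftl0 is killed with SIGKILL (dirty_shutdown.sh@83), so none of the FTL shutdown steps from the earlier clean unload (Persist superblock, Set FTL clean state) get to run; a fresh 1 GiB file is then generated and a standalone spdk_dd re-creates the bdev stack from the config saved with save_subsystem_config and writes through it. A schematic of that sequence, reconstructed from the traced commands with paths shortened ($TGT_PID stands in for the literal 75918 of this run):

    kill -9 "$TGT_PID"                 # dirty shutdown: FTL stays marked dirty on media
    spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
    spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 \
            --json=ftl.json            # rebuild bdevs from the saved config; FTL starts
                                       # dirty and the blobstore reports recovery below

The --seek=262144 places this write one GiB into ftl0 (262144 blocks of 4 KiB), so it does not overlap the testfile data written through /dev/nbd0 before the kill; the md5sum taken of testfile earlier supplies the reference checksum for the post-recovery readback, though that comparison falls outside this excerpt.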
00:24:14.140 [2024-07-26 05:21:33.155778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76752 ]
00:24:14.406 [2024-07-26 05:21:33.337147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:14.676 [2024-07-26 05:21:33.564747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:21.183  Copying: 1024/1024 [MB] (average 211 MBps)
00:24:21.183 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 75918 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:24:21.183 05:21:40 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:21.442 [2024-07-26 05:21:40.173945] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:24:21.442 [2024-07-26 05:21:40.174135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76827 ]
00:24:21.700 [2024-07-26 05:21:40.356873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:21.959 [2024-07-26 05:21:40.586012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:21.959 [2024-07-26 05:21:40.983376] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:21.959 [2024-07-26 05:21:40.983440] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:21.959 [2024-07-26 05:21:41.046588] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore
00:24:22.218 [2024-07-26 05:21:41.046922] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:24:22.218 [2024-07-26 05:21:41.047238] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:24:22.218 [2024-07-26 05:21:41.306940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:22.218 [2024-07-26 05:21:41.306990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:24:22.218 [2024-07-26 05:21:41.307005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:24:22.218 [2024-07-26 05:21:41.307032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:22.218 [2024-07-26 05:21:41.307078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:22.218 [2024-07-26 05:21:41.307089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:22.218 [2024-07-26 05:21:41.307100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:24:22.218 [2024-07-26 05:21:41.307110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:22.218 [2024-07-26 05:21:41.307132] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:22.218 [2024-07-26 05:21:41.308300] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:22.218 [2024-07-26 05:21:41.308327] mngt/ftl_mngt.c:
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.218 [2024-07-26 05:21:41.308338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:22.218 [2024-07-26 05:21:41.308352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.201 ms 00:24:22.218 [2024-07-26 05:21:41.308361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.218 [2024-07-26 05:21:41.309802] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:22.478 [2024-07-26 05:21:41.329808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.478 [2024-07-26 05:21:41.329857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:22.478 [2024-07-26 05:21:41.329872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.007 ms 00:24:22.478 [2024-07-26 05:21:41.329883] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.478 [2024-07-26 05:21:41.329945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.478 [2024-07-26 05:21:41.329958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:22.478 [2024-07-26 05:21:41.329969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:22.478 [2024-07-26 05:21:41.329982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.478 [2024-07-26 05:21:41.336659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.478 [2024-07-26 05:21:41.336687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.478 [2024-07-26 05:21:41.336698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.605 ms 00:24:22.478 [2024-07-26 05:21:41.336708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.478 [2024-07-26 05:21:41.336798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.478 [2024-07-26 05:21:41.336811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.478 [2024-07-26 05:21:41.336835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:22.478 [2024-07-26 05:21:41.336844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.478 [2024-07-26 05:21:41.336881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.478 [2024-07-26 05:21:41.336892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:22.478 [2024-07-26 05:21:41.336902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:22.478 [2024-07-26 05:21:41.336911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.478 [2024-07-26 05:21:41.336938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:22.478 [2024-07-26 05:21:41.342677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.478 [2024-07-26 05:21:41.342705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.478 [2024-07-26 05:21:41.342716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.748 ms 00:24:22.478 [2024-07-26 05:21:41.342725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.478 [2024-07-26 05:21:41.342756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.478 [2024-07-26 05:21:41.342766] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:22.478 [2024-07-26 05:21:41.342780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:22.478 [2024-07-26 05:21:41.342789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.478 [2024-07-26 05:21:41.342835] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:22.478 [2024-07-26 05:21:41.342858] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:22.478 [2024-07-26 05:21:41.342889] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:22.478 [2024-07-26 05:21:41.342906] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:22.478 [2024-07-26 05:21:41.342968] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:22.478 [2024-07-26 05:21:41.342983] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:22.478 [2024-07-26 05:21:41.342995] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:22.478 [2024-07-26 05:21:41.343007] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:22.478 [2024-07-26 05:21:41.343018] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:22.478 [2024-07-26 05:21:41.343028] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:22.478 [2024-07-26 05:21:41.343037] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:22.478 [2024-07-26 05:21:41.343046] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:22.478 [2024-07-26 05:21:41.343055] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:22.478 [2024-07-26 05:21:41.343065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.478 [2024-07-26 05:21:41.343074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:22.478 [2024-07-26 05:21:41.343086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:24:22.478 [2024-07-26 05:21:41.343095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.478 [2024-07-26 05:21:41.343147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.478 [2024-07-26 05:21:41.343157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:22.478 [2024-07-26 05:21:41.343166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:22.478 [2024-07-26 05:21:41.343175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.479 [2024-07-26 05:21:41.343268] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:22.479 [2024-07-26 05:21:41.343281] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:22.479 [2024-07-26 05:21:41.343292] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:22.479 [2024-07-26 05:21:41.343301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.479 [2024-07-26 05:21:41.343315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:22.479 
[2024-07-26 05:21:41.343324] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:22.479 [2024-07-26 05:21:41.343333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:22.479 [2024-07-26 05:21:41.343343] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:22.479 [2024-07-26 05:21:41.343353] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:22.479 [2024-07-26 05:21:41.343362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.479 [2024-07-26 05:21:41.343371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:22.479 [2024-07-26 05:21:41.343380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:22.479 [2024-07-26 05:21:41.343389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.479 [2024-07-26 05:21:41.343397] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:22.479 [2024-07-26 05:21:41.343406] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:22.479 [2024-07-26 05:21:41.343415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.479 [2024-07-26 05:21:41.343423] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:22.479 [2024-07-26 05:21:41.343443] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:22.479 [2024-07-26 05:21:41.343458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.479 [2024-07-26 05:21:41.343467] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:22.479 [2024-07-26 05:21:41.343476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:22.479 [2024-07-26 05:21:41.343485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:22.479 [2024-07-26 05:21:41.343495] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:22.479 [2024-07-26 05:21:41.343504] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:22.479 [2024-07-26 05:21:41.343512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:22.479 [2024-07-26 05:21:41.343521] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:22.479 [2024-07-26 05:21:41.343529] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:22.479 [2024-07-26 05:21:41.343538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:22.479 [2024-07-26 05:21:41.343547] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:22.479 [2024-07-26 05:21:41.343556] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:22.479 [2024-07-26 05:21:41.343564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:22.479 [2024-07-26 05:21:41.343573] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:22.479 [2024-07-26 05:21:41.343581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:22.479 [2024-07-26 05:21:41.343590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:22.479 [2024-07-26 05:21:41.343598] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:22.479 [2024-07-26 05:21:41.343623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:22.479 [2024-07-26 05:21:41.343632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.25 MiB 00:24:22.479 [2024-07-26 05:21:41.343641] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:22.479 [2024-07-26 05:21:41.343650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:22.479 [2024-07-26 05:21:41.343659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:22.479 [2024-07-26 05:21:41.343668] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:22.479 [2024-07-26 05:21:41.343678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:22.479 [2024-07-26 05:21:41.343687] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:22.479 [2024-07-26 05:21:41.343697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.479 [2024-07-26 05:21:41.343706] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:22.479 [2024-07-26 05:21:41.343716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:22.479 [2024-07-26 05:21:41.343724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:22.479 [2024-07-26 05:21:41.343734] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:22.479 [2024-07-26 05:21:41.343743] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:22.479 [2024-07-26 05:21:41.343752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:22.479 [2024-07-26 05:21:41.343761] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:22.479 [2024-07-26 05:21:41.343773] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.479 [2024-07-26 05:21:41.343784] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:22.479 [2024-07-26 05:21:41.343794] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:22.479 [2024-07-26 05:21:41.343804] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:22.479 [2024-07-26 05:21:41.343815] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:22.479 [2024-07-26 05:21:41.343825] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:22.479 [2024-07-26 05:21:41.343835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:22.479 [2024-07-26 05:21:41.343845] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:22.479 [2024-07-26 05:21:41.343856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:22.479 [2024-07-26 05:21:41.343871] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:22.479 [2024-07-26 05:21:41.343881] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 
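
Each "Region type:… ver:… blk_offs:… blk_sz:…" record in the superblock metadata dump (the nvc table continues below, followed by the base-device half) encodes its offset and size in FTL blocks rather than MiB. An illustrative parser, again assuming 4096-byte blocks, recovers the MiB figures shown by dump_region; the regex is written for exactly the line shape in this log:

    import re

    LINE = (r"Region type:(0x[0-9a-f]+) ver:(\d+) "
            r"blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)")

    def region_mib(log_line):
        # Convert one dump line to (type, version, offset MiB, size MiB),
        # assuming 4096-byte FTL blocks.
        m = re.search(LINE, log_line)
        if m is None:
            return None
        rtype, ver = m.group(1), int(m.group(2))
        offs_mib = int(m.group(3), 16) * 4096 / 2**20
        size_mib = int(m.group(4), 16) * 4096 / 2**20
        return rtype, ver, offs_mib, size_mib

    print(region_mib("Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000"))
    # -> ('0x2', 0, 0.125, 80.0): the l2p region again, 80 MiB at offset 0.12 MiB
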
00:24:22.479 [2024-07-26 05:21:41.343891] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:22.479 [2024-07-26 05:21:41.343901] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:22.479 [2024-07-26 05:21:41.343912] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:22.479 [2024-07-26 05:21:41.343922] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:22.479 [2024-07-26 05:21:41.343933] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.479 [2024-07-26 05:21:41.343943] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:22.479 [2024-07-26 05:21:41.343954] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:22.479 [2024-07-26 05:21:41.343964] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:22.479 [2024-07-26 05:21:41.343974] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:22.479 [2024-07-26 05:21:41.343985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.479 [2024-07-26 05:21:41.343997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:22.479 [2024-07-26 05:21:41.344011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:24:22.479 [2024-07-26 05:21:41.344021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.479 [2024-07-26 05:21:41.366995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.479 [2024-07-26 05:21:41.367026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:22.479 [2024-07-26 05:21:41.367042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.935 ms 00:24:22.479 [2024-07-26 05:21:41.367051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.479 [2024-07-26 05:21:41.367124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.479 [2024-07-26 05:21:41.367134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:22.479 [2024-07-26 05:21:41.367143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:22.479 [2024-07-26 05:21:41.367153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.479 [2024-07-26 05:21:41.431438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.479 [2024-07-26 05:21:41.431485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:22.479 [2024-07-26 05:21:41.431499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.238 ms 00:24:22.479 [2024-07-26 05:21:41.431509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.479 [2024-07-26 05:21:41.431542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.479 [2024-07-26 05:21:41.431552] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.479 [2024-07-26 05:21:41.431563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:22.479 [2024-07-26 05:21:41.431572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.479 [2024-07-26 05:21:41.432016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.479 [2024-07-26 05:21:41.432033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:22.479 [2024-07-26 05:21:41.432043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:24:22.479 [2024-07-26 05:21:41.432053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.479 [2024-07-26 05:21:41.432155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.479 [2024-07-26 05:21:41.432167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.479 [2024-07-26 05:21:41.432178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:24:22.479 [2024-07-26 05:21:41.432187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.479 [2024-07-26 05:21:41.453907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.480 [2024-07-26 05:21:41.453938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.480 [2024-07-26 05:21:41.453951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.678 ms 00:24:22.480 [2024-07-26 05:21:41.453961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.480 [2024-07-26 05:21:41.472362] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:22.480 [2024-07-26 05:21:41.472395] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:22.480 [2024-07-26 05:21:41.472412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.480 [2024-07-26 05:21:41.472438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:22.480 [2024-07-26 05:21:41.472449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.351 ms 00:24:22.480 [2024-07-26 05:21:41.472459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.480 [2024-07-26 05:21:41.502507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.480 [2024-07-26 05:21:41.502542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:22.480 [2024-07-26 05:21:41.502555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.009 ms 00:24:22.480 [2024-07-26 05:21:41.502565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.480 [2024-07-26 05:21:41.521409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.480 [2024-07-26 05:21:41.521442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:22.480 [2024-07-26 05:21:41.521455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.799 ms 00:24:22.480 [2024-07-26 05:21:41.521480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.480 [2024-07-26 05:21:41.539770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.480 [2024-07-26 05:21:41.539816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
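
The Action / name / duration / status quadruplets emitted by trace_step around this point (the duration and status lines completing the "Restore trim metadata" record follow below) make it easy to see where startup time goes. A small log-reading aid, not an SPDK utility, that pairs each name with the duration that follows it and ranks the totals:

    import re
    from collections import defaultdict

    NAME = re.compile(r"name: (.+)$")
    DUR = re.compile(r"duration: ([\d.]+) ms")

    def slowest_steps(lines, top=3):
        # Pair each "name:" record with the next "duration:" record and
        # rank the accumulated totals.
        totals = defaultdict(float)
        current = None
        for line in lines:
            if (m := NAME.search(line)):
                current = m.group(1)
            elif (m := DUR.search(line)) and current:
                totals[current] += float(m.group(1))
                current = None
        return sorted(totals.items(), key=lambda kv: -kv[1])[:top]

    sample = [
        "name: Initialize NV cache",
        "duration: 64.238 ms",
        "name: Initialize valid map",
        "duration: 0.002 ms",
    ]
    print(slowest_steps(sample))
    # -> [('Initialize NV cache', 64.238), ('Initialize valid map', 0.002)]
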
00:24:22.480 [2024-07-26 05:21:41.539855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.253 ms 00:24:22.480 [2024-07-26 05:21:41.539864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.480 [2024-07-26 05:21:41.540392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.480 [2024-07-26 05:21:41.540408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:22.480 [2024-07-26 05:21:41.540420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:24:22.480 [2024-07-26 05:21:41.540429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.738 [2024-07-26 05:21:41.630791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.738 [2024-07-26 05:21:41.630852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:22.738 [2024-07-26 05:21:41.630883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.340 ms 00:24:22.738 [2024-07-26 05:21:41.630894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.739 [2024-07-26 05:21:41.643371] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:22.739 [2024-07-26 05:21:41.646246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.739 [2024-07-26 05:21:41.646279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:22.739 [2024-07-26 05:21:41.646292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.292 ms 00:24:22.739 [2024-07-26 05:21:41.646303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.739 [2024-07-26 05:21:41.646387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.739 [2024-07-26 05:21:41.646399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:22.739 [2024-07-26 05:21:41.646411] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:22.739 [2024-07-26 05:21:41.646421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.739 [2024-07-26 05:21:41.646488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.739 [2024-07-26 05:21:41.646501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:22.739 [2024-07-26 05:21:41.646514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:22.739 [2024-07-26 05:21:41.646524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.739 [2024-07-26 05:21:41.648640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.739 [2024-07-26 05:21:41.648670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:22.739 [2024-07-26 05:21:41.648682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:24:22.739 [2024-07-26 05:21:41.648692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.739 [2024-07-26 05:21:41.648728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.739 [2024-07-26 05:21:41.648740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:22.739 [2024-07-26 05:21:41.648750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:22.739 [2024-07-26 05:21:41.648760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.739 [2024-07-26 
05:21:41.648798] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:22.739 [2024-07-26 05:21:41.648810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.739 [2024-07-26 05:21:41.648820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:22.739 [2024-07-26 05:21:41.648829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:22.739 [2024-07-26 05:21:41.648839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.739 [2024-07-26 05:21:41.687535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.739 [2024-07-26 05:21:41.687572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:22.739 [2024-07-26 05:21:41.687592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.677 ms 00:24:22.739 [2024-07-26 05:21:41.687602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.739 [2024-07-26 05:21:41.687670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.739 [2024-07-26 05:21:41.687682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:22.739 [2024-07-26 05:21:41.687693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:22.739 [2024-07-26 05:21:41.687703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.739 [2024-07-26 05:21:41.689056] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 381.641 ms, result 0 00:24:55.567  Copying: 1024/1024 [MB] (average 31 MBps)[2024-07-26 05:22:14.409620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.567 [2024-07-26 05:22:14.409708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:55.567 [2024-07-26 05:22:14.409742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:55.567 [2024-07-26 05:22:14.409760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.567 [2024-07-26 05:22:14.411727] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:55.567 [2024-07-26 05:22:14.418841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.567 [2024-07-26 05:22:14.418877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Unregister IO device 00:24:55.567 [2024-07-26 05:22:14.418890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.069 ms 00:24:55.567 [2024-07-26 05:22:14.418916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.567 [2024-07-26 05:22:14.428549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.567 [2024-07-26 05:22:14.428590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:55.567 [2024-07-26 05:22:14.428604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.818 ms 00:24:55.567 [2024-07-26 05:22:14.428614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.567 [2024-07-26 05:22:14.448371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.567 [2024-07-26 05:22:14.448423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:55.567 [2024-07-26 05:22:14.448440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.739 ms 00:24:55.567 [2024-07-26 05:22:14.448452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.567 [2024-07-26 05:22:14.453936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.567 [2024-07-26 05:22:14.453977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:55.567 [2024-07-26 05:22:14.453989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.448 ms 00:24:55.567 [2024-07-26 05:22:14.454000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.567 [2024-07-26 05:22:14.492399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.567 [2024-07-26 05:22:14.492435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:55.567 [2024-07-26 05:22:14.492448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.337 ms 00:24:55.567 [2024-07-26 05:22:14.492458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.567 [2024-07-26 05:22:14.514313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.567 [2024-07-26 05:22:14.514350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:55.567 [2024-07-26 05:22:14.514364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.818 ms 00:24:55.567 [2024-07-26 05:22:14.514375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.567 [2024-07-26 05:22:14.606172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.567 [2024-07-26 05:22:14.606230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:55.567 [2024-07-26 05:22:14.606246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.755 ms 00:24:55.567 [2024-07-26 05:22:14.606264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.567 [2024-07-26 05:22:14.644439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.567 [2024-07-26 05:22:14.644474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:55.568 [2024-07-26 05:22:14.644487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.155 ms 00:24:55.568 [2024-07-26 05:22:14.644496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.834 [2024-07-26 05:22:14.683324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.834 [2024-07-26 05:22:14.683360] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:55.834 [2024-07-26 05:22:14.683374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.788 ms 00:24:55.834 [2024-07-26 05:22:14.683397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.834 [2024-07-26 05:22:14.720761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.834 [2024-07-26 05:22:14.720796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:55.834 [2024-07-26 05:22:14.720809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.326 ms 00:24:55.834 [2024-07-26 05:22:14.720818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.834 [2024-07-26 05:22:14.758763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.834 [2024-07-26 05:22:14.758799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:55.834 [2024-07-26 05:22:14.758811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.858 ms 00:24:55.834 [2024-07-26 05:22:14.758820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.834 [2024-07-26 05:22:14.758856] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:55.834 [2024-07-26 05:22:14.758879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108800 / 261120 wr_cnt: 1 state: open 00:24:55.834 [2024-07-26 05:22:14.758895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.758907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.758917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.758928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.758938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.758949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.758959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.758970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.758980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.758991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759343] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:55.834 [2024-07-26 05:22:14.759546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 
05:22:14.759620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 
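
The band dump continues below through Band 100 and closes with device statistics. Those statistics are easy to re-derive from the figures printed in this dump: WAF is simply total writes over user writes, and Band 1's validity counter gives its fill level. A worked check:

    # Worked check of the statistics that close the band dump below.
    total_writes = 109_760          # "total writes: 109760"
    user_writes = 108_800           # "user writes: 108800"
    print(round(total_writes / user_writes, 4))
    # -> 1.0088, the logged WAF (write amplification factor)

    # Band 1 is the only non-free band: 108800 of 261120 blocks valid.
    print(f"{108_800 / 261_120:.1%}")
    # -> 41.7% of the band holds valid data
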
00:24:55.835 [2024-07-26 05:22:14.759883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:55.835 [2024-07-26 05:22:14.759994] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:55.835 [2024-07-26 05:22:14.760004] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cec5690a-de13-4374-abd2-4c80df841bd5 00:24:55.835 [2024-07-26 05:22:14.760015] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108800 00:24:55.835 [2024-07-26 05:22:14.760028] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 109760 00:24:55.835 [2024-07-26 05:22:14.760038] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108800 00:24:55.835 [2024-07-26 05:22:14.760048] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0088 00:24:55.835 [2024-07-26 05:22:14.760058] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:55.835 [2024-07-26 05:22:14.760068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:55.835 [2024-07-26 05:22:14.760078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:55.835 [2024-07-26 05:22:14.760087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:55.835 [2024-07-26 05:22:14.760106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:55.835 [2024-07-26 05:22:14.760117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.835 [2024-07-26 05:22:14.760128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:55.835 [2024-07-26 05:22:14.760138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:24:55.835 [2024-07-26 05:22:14.760148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.835 [2024-07-26 05:22:14.780270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.835 [2024-07-26 05:22:14.780303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:55.835 [2024-07-26 05:22:14.780315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.076 ms 00:24:55.835 [2024-07-26 05:22:14.780325] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.835 [2024-07-26 05:22:14.780577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.835 [2024-07-26 05:22:14.780588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:55.835 [2024-07-26 05:22:14.780598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:24:55.835 [2024-07-26 05:22:14.780608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.835 [2024-07-26 05:22:14.834809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.835 [2024-07-26 05:22:14.834844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:55.835 [2024-07-26 05:22:14.834857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.835 [2024-07-26 05:22:14.834884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.835 [2024-07-26 05:22:14.834939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.835 [2024-07-26 05:22:14.834950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:55.835 [2024-07-26 05:22:14.834960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.835 [2024-07-26 05:22:14.834970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.835 [2024-07-26 05:22:14.835049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.835 [2024-07-26 05:22:14.835063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:55.835 [2024-07-26 05:22:14.835073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.835 [2024-07-26 05:22:14.835083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.835 [2024-07-26 05:22:14.835101] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.835 [2024-07-26 05:22:14.835111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:55.835 [2024-07-26 05:22:14.835121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.835 [2024-07-26 05:22:14.835132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.095 [2024-07-26 05:22:14.955038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.095 [2024-07-26 05:22:14.955086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:56.095 [2024-07-26 05:22:14.955101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.095 [2024-07-26 05:22:14.955128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.095 [2024-07-26 05:22:15.001844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.095 [2024-07-26 05:22:15.001892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:56.095 [2024-07-26 05:22:15.001907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.095 [2024-07-26 05:22:15.001918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.095 [2024-07-26 05:22:15.002006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.095 [2024-07-26 05:22:15.002018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:56.095 [2024-07-26 05:22:15.002028] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.095 [2024-07-26 05:22:15.002039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.095 [2024-07-26 05:22:15.002082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.095 [2024-07-26 05:22:15.002094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:56.095 [2024-07-26 05:22:15.002104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.095 [2024-07-26 05:22:15.002113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.095 [2024-07-26 05:22:15.002239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.095 [2024-07-26 05:22:15.002259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:56.095 [2024-07-26 05:22:15.002269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.095 [2024-07-26 05:22:15.002279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.095 [2024-07-26 05:22:15.002316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.095 [2024-07-26 05:22:15.002329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:56.095 [2024-07-26 05:22:15.002339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.095 [2024-07-26 05:22:15.002349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.095 [2024-07-26 05:22:15.002385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.095 [2024-07-26 05:22:15.002400] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:56.095 [2024-07-26 05:22:15.002410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.095 [2024-07-26 05:22:15.002420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.095 [2024-07-26 05:22:15.002462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:56.095 [2024-07-26 05:22:15.002473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:56.095 [2024-07-26 05:22:15.002483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:56.095 [2024-07-26 05:22:15.002493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:56.095 [2024-07-26 05:22:15.002610] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 593.766 ms, result 0 00:24:57.474 00:24:57.474 00:24:57.733 05:22:16 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:59.639 05:22:18 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:59.639 [2024-07-26 05:22:18.439252] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
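
The spdk_dd invocation above reads 262144 blocks from the ftl0 bdev back into the test file for the md5sum comparison that frames this dirty-shutdown check. At the 4 KiB logical block size assumed here (consistent with the 1024/1024 [MB] copy totals seen earlier in the test), that is exactly 1 GiB:

    # Size check for the spdk_dd invocation above.
    count = 262_144      # --count=262144 from the command line
    block = 4096         # assumed 4 KiB logical block, consistent with the
                         # 1024/1024 [MB] copy totals reported in this test
    print(count * block // 2**20)   # -> 1024 (MiB transferred)

Matching md5sums between the data written before the dirty shutdown and the data read back after recovery is presumably how dirty_shutdown.sh decides the run passed.
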
00:24:59.639 [2024-07-26 05:22:18.439378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77210 ] 00:24:59.639 [2024-07-26 05:22:18.601303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.898 [2024-07-26 05:22:18.853062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.157 [2024-07-26 05:22:19.254458] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:00.157 [2024-07-26 05:22:19.254540] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:00.417 [2024-07-26 05:22:19.412406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.417 [2024-07-26 05:22:19.412460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:00.417 [2024-07-26 05:22:19.412475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:00.417 [2024-07-26 05:22:19.412502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.417 [2024-07-26 05:22:19.412554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.417 [2024-07-26 05:22:19.412566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:00.417 [2024-07-26 05:22:19.412577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:00.417 [2024-07-26 05:22:19.412587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.417 [2024-07-26 05:22:19.412607] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:00.417 [2024-07-26 05:22:19.413839] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:00.417 [2024-07-26 05:22:19.413866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.417 [2024-07-26 05:22:19.413877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:00.417 [2024-07-26 05:22:19.413887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:25:00.417 [2024-07-26 05:22:19.413897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.417 [2024-07-26 05:22:19.415344] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:00.417 [2024-07-26 05:22:19.435750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.417 [2024-07-26 05:22:19.435789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:00.417 [2024-07-26 05:22:19.435808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.407 ms 00:25:00.417 [2024-07-26 05:22:19.435818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.417 [2024-07-26 05:22:19.435881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.417 [2024-07-26 05:22:19.435892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:00.417 [2024-07-26 05:22:19.435903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:00.417 [2024-07-26 05:22:19.435913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.417 [2024-07-26 05:22:19.442761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.417 [2024-07-26 
05:22:19.442789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:00.417 [2024-07-26 05:22:19.442801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.781 ms 00:25:00.417 [2024-07-26 05:22:19.442827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.417 [2024-07-26 05:22:19.442915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.417 [2024-07-26 05:22:19.442928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:00.417 [2024-07-26 05:22:19.442939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:00.417 [2024-07-26 05:22:19.442949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.417 [2024-07-26 05:22:19.442989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.417 [2024-07-26 05:22:19.443004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:00.417 [2024-07-26 05:22:19.443015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:00.417 [2024-07-26 05:22:19.443025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.417 [2024-07-26 05:22:19.443054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:00.417 [2024-07-26 05:22:19.448983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.417 [2024-07-26 05:22:19.449015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:00.417 [2024-07-26 05:22:19.449027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.938 ms 00:25:00.417 [2024-07-26 05:22:19.449038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.417 [2024-07-26 05:22:19.449072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.417 [2024-07-26 05:22:19.449083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:00.417 [2024-07-26 05:22:19.449094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:00.417 [2024-07-26 05:22:19.449103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.417 [2024-07-26 05:22:19.449153] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:00.417 [2024-07-26 05:22:19.449180] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:00.417 [2024-07-26 05:22:19.449228] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:00.417 [2024-07-26 05:22:19.449263] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:00.417 [2024-07-26 05:22:19.449341] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:00.417 [2024-07-26 05:22:19.449354] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:00.417 [2024-07-26 05:22:19.449367] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:00.417 [2024-07-26 05:22:19.449391] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:00.417 [2024-07-26 05:22:19.449403] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:00.417 [2024-07-26 05:22:19.449418] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:00.417 [2024-07-26 05:22:19.449428] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:00.417 [2024-07-26 05:22:19.449438] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:00.417 [2024-07-26 05:22:19.449447] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:00.417 [2024-07-26 05:22:19.449458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.417 [2024-07-26 05:22:19.449468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:00.417 [2024-07-26 05:22:19.449479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:25:00.417 [2024-07-26 05:22:19.449489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.417 [2024-07-26 05:22:19.449545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.417 [2024-07-26 05:22:19.449556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:00.417 [2024-07-26 05:22:19.449569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:00.417 [2024-07-26 05:22:19.449579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.417 [2024-07-26 05:22:19.449644] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:00.417 [2024-07-26 05:22:19.449657] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:00.417 [2024-07-26 05:22:19.449667] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:00.417 [2024-07-26 05:22:19.449677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.417 [2024-07-26 05:22:19.449687] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:00.417 [2024-07-26 05:22:19.449696] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:00.417 [2024-07-26 05:22:19.449705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:00.417 [2024-07-26 05:22:19.449714] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:00.417 [2024-07-26 05:22:19.449724] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:00.417 [2024-07-26 05:22:19.449733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:00.417 [2024-07-26 05:22:19.449743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:00.417 [2024-07-26 05:22:19.449752] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:00.417 [2024-07-26 05:22:19.449761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:00.417 [2024-07-26 05:22:19.449771] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:00.417 [2024-07-26 05:22:19.449779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:00.417 [2024-07-26 05:22:19.449789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.417 [2024-07-26 05:22:19.449797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:00.417 [2024-07-26 05:22:19.449807] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:00.417 [2024-07-26 05:22:19.449815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:25:00.418 [2024-07-26 05:22:19.449824] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:00.418 [2024-07-26 05:22:19.449833] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:00.418 [2024-07-26 05:22:19.449853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:00.418 [2024-07-26 05:22:19.449863] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:00.418 [2024-07-26 05:22:19.449872] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:00.418 [2024-07-26 05:22:19.449881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:00.418 [2024-07-26 05:22:19.449890] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:00.418 [2024-07-26 05:22:19.449899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:00.418 [2024-07-26 05:22:19.449908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:00.418 [2024-07-26 05:22:19.449917] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:00.418 [2024-07-26 05:22:19.449926] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:00.418 [2024-07-26 05:22:19.449935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:00.418 [2024-07-26 05:22:19.449944] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:00.418 [2024-07-26 05:22:19.449953] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:00.418 [2024-07-26 05:22:19.449961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:00.418 [2024-07-26 05:22:19.449972] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:00.418 [2024-07-26 05:22:19.449981] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:00.418 [2024-07-26 05:22:19.449990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:00.418 [2024-07-26 05:22:19.449999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:00.418 [2024-07-26 05:22:19.450008] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:00.418 [2024-07-26 05:22:19.450017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:00.418 [2024-07-26 05:22:19.450025] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:00.418 [2024-07-26 05:22:19.450035] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:00.418 [2024-07-26 05:22:19.450045] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:00.418 [2024-07-26 05:22:19.450057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:00.418 [2024-07-26 05:22:19.450068] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:00.418 [2024-07-26 05:22:19.450078] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:00.418 [2024-07-26 05:22:19.450087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:00.418 [2024-07-26 05:22:19.450097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:00.418 [2024-07-26 05:22:19.450106] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:00.418 [2024-07-26 05:22:19.450115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:00.418 [2024-07-26 05:22:19.450125] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:00.418 [2024-07-26 05:22:19.450137] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.418 [2024-07-26 05:22:19.450148] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:00.418 [2024-07-26 05:22:19.450162] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:00.418 [2024-07-26 05:22:19.450173] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:00.418 [2024-07-26 05:22:19.450184] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:00.418 [2024-07-26 05:22:19.450194] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:00.418 [2024-07-26 05:22:19.450204] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:00.418 [2024-07-26 05:22:19.450224] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:00.418 [2024-07-26 05:22:19.450234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:00.418 [2024-07-26 05:22:19.450244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:00.418 [2024-07-26 05:22:19.450254] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:00.418 [2024-07-26 05:22:19.450265] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:00.418 [2024-07-26 05:22:19.450275] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:00.418 [2024-07-26 05:22:19.450286] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:00.418 [2024-07-26 05:22:19.450296] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:00.418 [2024-07-26 05:22:19.450307] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:00.418 [2024-07-26 05:22:19.450318] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:00.418 [2024-07-26 05:22:19.450328] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:00.418 [2024-07-26 05:22:19.450339] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:00.418 [2024-07-26 05:22:19.450349] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:25:00.418 [2024-07-26 05:22:19.450359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.418 [2024-07-26 05:22:19.450370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:00.418 [2024-07-26 05:22:19.450380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:25:00.418 [2024-07-26 05:22:19.450389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.418 [2024-07-26 05:22:19.474903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.418 [2024-07-26 05:22:19.474934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:00.418 [2024-07-26 05:22:19.474947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.469 ms 00:25:00.418 [2024-07-26 05:22:19.474957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.418 [2024-07-26 05:22:19.475030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.418 [2024-07-26 05:22:19.475043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:00.418 [2024-07-26 05:22:19.475053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:00.418 [2024-07-26 05:22:19.475063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.677 [2024-07-26 05:22:19.540099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.677 [2024-07-26 05:22:19.540132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:00.677 [2024-07-26 05:22:19.540145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.988 ms 00:25:00.677 [2024-07-26 05:22:19.540157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.677 [2024-07-26 05:22:19.540191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.677 [2024-07-26 05:22:19.540201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:00.677 [2024-07-26 05:22:19.540223] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:00.677 [2024-07-26 05:22:19.540232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.677 [2024-07-26 05:22:19.540722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.677 [2024-07-26 05:22:19.540735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:00.677 [2024-07-26 05:22:19.540745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:25:00.677 [2024-07-26 05:22:19.540756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.677 [2024-07-26 05:22:19.540866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.677 [2024-07-26 05:22:19.540885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:00.677 [2024-07-26 05:22:19.540896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:25:00.677 [2024-07-26 05:22:19.540905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.677 [2024-07-26 05:22:19.563517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.677 [2024-07-26 05:22:19.563549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:00.677 [2024-07-26 05:22:19.563561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.591 ms 00:25:00.677 [2024-07-26 
05:22:19.563572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.677 [2024-07-26 05:22:19.582712] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:00.677 [2024-07-26 05:22:19.582746] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:00.677 [2024-07-26 05:22:19.582760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.677 [2024-07-26 05:22:19.582770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:00.677 [2024-07-26 05:22:19.582781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.090 ms 00:25:00.677 [2024-07-26 05:22:19.582790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.677 [2024-07-26 05:22:19.612428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.677 [2024-07-26 05:22:19.612474] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:00.677 [2024-07-26 05:22:19.612487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.598 ms 00:25:00.677 [2024-07-26 05:22:19.612513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.677 [2024-07-26 05:22:19.630614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.677 [2024-07-26 05:22:19.630648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:00.677 [2024-07-26 05:22:19.630659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.053 ms 00:25:00.677 [2024-07-26 05:22:19.630668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.678 [2024-07-26 05:22:19.649092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.678 [2024-07-26 05:22:19.649125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:00.678 [2024-07-26 05:22:19.649137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.386 ms 00:25:00.678 [2024-07-26 05:22:19.649145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.678 [2024-07-26 05:22:19.649679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.678 [2024-07-26 05:22:19.649698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:00.678 [2024-07-26 05:22:19.649709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:25:00.678 [2024-07-26 05:22:19.649719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.678 [2024-07-26 05:22:19.739789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.678 [2024-07-26 05:22:19.739846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:00.678 [2024-07-26 05:22:19.739861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.049 ms 00:25:00.678 [2024-07-26 05:22:19.739871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.678 [2024-07-26 05:22:19.751908] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:00.678 [2024-07-26 05:22:19.754745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.678 [2024-07-26 05:22:19.754771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:00.678 [2024-07-26 05:22:19.754784] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.818 ms 00:25:00.678 [2024-07-26 05:22:19.754793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.678 [2024-07-26 05:22:19.754876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.678 [2024-07-26 05:22:19.754892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:00.678 [2024-07-26 05:22:19.754903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:00.678 [2024-07-26 05:22:19.754913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.678 [2024-07-26 05:22:19.756272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.678 [2024-07-26 05:22:19.756307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:00.678 [2024-07-26 05:22:19.756319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.320 ms 00:25:00.678 [2024-07-26 05:22:19.756329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.678 [2024-07-26 05:22:19.758431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.678 [2024-07-26 05:22:19.758457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:00.678 [2024-07-26 05:22:19.758472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.078 ms 00:25:00.678 [2024-07-26 05:22:19.758482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.678 [2024-07-26 05:22:19.758527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.678 [2024-07-26 05:22:19.758538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:00.678 [2024-07-26 05:22:19.758548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:00.678 [2024-07-26 05:22:19.758563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.678 [2024-07-26 05:22:19.758598] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:00.678 [2024-07-26 05:22:19.758610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.678 [2024-07-26 05:22:19.758619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:00.678 [2024-07-26 05:22:19.758629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:00.678 [2024-07-26 05:22:19.758641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.937 [2024-07-26 05:22:19.797525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.937 [2024-07-26 05:22:19.797564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:00.937 [2024-07-26 05:22:19.797578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.863 ms 00:25:00.937 [2024-07-26 05:22:19.797589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.937 [2024-07-26 05:22:19.797659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.937 [2024-07-26 05:22:19.797677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:00.937 [2024-07-26 05:22:19.797688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:00.937 [2024-07-26 05:22:19.797698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.937 [2024-07-26 05:22:19.803744] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 390.003 ms, result 0 00:25:28.779  Copying: 1024/1024 [MB] (average 37 MBps)[2024-07-26 05:22:47.761925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.779 [2024-07-26 05:22:47.762520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:28.779 [2024-07-26 05:22:47.762574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:28.779 [2024-07-26 05:22:47.762595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.779 [2024-07-26 05:22:47.762651] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:28.779 [2024-07-26 05:22:47.769572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.779 [2024-07-26 05:22:47.769619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:28.779 [2024-07-26 05:22:47.769639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.887 ms 00:25:28.779 [2024-07-26 05:22:47.769655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.779 [2024-07-26 05:22:47.770010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.779 [2024-07-26 05:22:47.770037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:28.779 [2024-07-26 05:22:47.770068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:25:28.779 [2024-07-26 05:22:47.770091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.779 [2024-07-26 05:22:47.783138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.779 [2024-07-26 05:22:47.783227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:28.779 [2024-07-26 05:22:47.783251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.011 ms 00:25:28.779 [2024-07-26 05:22:47.783266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.779 [2024-07-26 05:22:47.791674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.779 [2024-07-26 05:22:47.791721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:28.779 [2024-07-26 05:22:47.791740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.358 ms 00:25:28.779 [2024-07-26 05:22:47.791764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.779 [2024-07-26 05:22:47.852264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.779 [2024-07-26
05:22:47.852323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:28.779 [2024-07-26 05:22:47.852345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.428 ms 00:25:28.779 [2024-07-26 05:22:47.852362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.779 [2024-07-26 05:22:47.885283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.779 [2024-07-26 05:22:47.885344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:28.779 [2024-07-26 05:22:47.885367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.860 ms 00:25:28.779 [2024-07-26 05:22:47.885383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.040 [2024-07-26 05:22:47.889572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.040 [2024-07-26 05:22:47.889621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:29.040 [2024-07-26 05:22:47.889641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.109 ms 00:25:29.040 [2024-07-26 05:22:47.889657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.040 [2024-07-26 05:22:47.951836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.040 [2024-07-26 05:22:47.951901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:29.040 [2024-07-26 05:22:47.951924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.143 ms 00:25:29.040 [2024-07-26 05:22:47.951939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.040 [2024-07-26 05:22:48.013066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.040 [2024-07-26 05:22:48.013130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:29.040 [2024-07-26 05:22:48.013153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.065 ms 00:25:29.040 [2024-07-26 05:22:48.013169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.040 [2024-07-26 05:22:48.073092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.040 [2024-07-26 05:22:48.073157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:29.040 [2024-07-26 05:22:48.073179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.846 ms 00:25:29.040 [2024-07-26 05:22:48.073195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.040 [2024-07-26 05:22:48.133099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.040 [2024-07-26 05:22:48.133173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:29.040 [2024-07-26 05:22:48.133202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.745 ms 00:25:29.040 [2024-07-26 05:22:48.133244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.040 [2024-07-26 05:22:48.133313] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:29.040 [2024-07-26 05:22:48.133346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:29.040 [2024-07-26 05:22:48.133374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:25:29.040 [2024-07-26 05:22:48.133412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:25:29.040 [2024-07-26 05:22:48.133438 - 05:22:48.135301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4 - Band 100: 0 / 261120 wr_cnt: 0 state: free (97 bands, all free)
00:25:29.041 [2024-07-26 05:22:48.135329] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:29.041 [2024-07-26 05:22:48.135345] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cec5690a-de13-4374-abd2-4c80df841bd5
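The band dump above shows where the 1 GiB of test data landed: Band 1 is fully written and closed (261120/261120 valid blocks), Band 2 is still open (3840 blocks), and Bands 3-100 are free; 261120 + 3840 = 264960, matching the "total valid LBAs" figure in the statistics dumped below. Those statistics also report a write amplification factor (WAF), which is simply total device writes divided by user-initiated writes. As a minimal illustrative sketch (plain C, not SPDK's actual ftl_debug.c code), using the block counts from this log:

    #include <stdio.h>

    /* Illustrative only: recompute the WAF that ftl_dev_dump_stats reports,
     * using the "total writes" and "user writes" block counts from this log. */
    int main(void)
    {
        const double total_writes = 158144.0; /* all blocks written to the device */
        const double user_writes  = 156160.0; /* blocks written on behalf of the user */

        /* 158144 / 156160 = 1.0127..., matching "WAF: 1.0127" below; the
         * 1984-block difference is presumably internal FTL metadata traffic. */
        printf("WAF: %.4f\n", total_writes / user_writes);
        return 0;
    }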
[2024-07-26 05:22:48.135362] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:25:29.041 [2024-07-26 05:22:48.135379] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158144 00:25:29.041 [2024-07-26 05:22:48.135394] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156160 00:25:29.041 [2024-07-26 05:22:48.135419] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127 00:25:29.041 [2024-07-26 05:22:48.135434] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:29.041 [2024-07-26 05:22:48.135450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:29.041 [2024-07-26 05:22:48.135465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:29.041 [2024-07-26 05:22:48.135479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:29.041 [2024-07-26 05:22:48.135493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:29.041 [2024-07-26 05:22:48.135508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.041 [2024-07-26 05:22:48.135524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:29.041 [2024-07-26 05:22:48.135541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.198 ms 00:25:29.041 [2024-07-26 05:22:48.135556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.301 [2024-07-26 05:22:48.165418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.301 [2024-07-26 05:22:48.165464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:29.301 [2024-07-26 05:22:48.165485] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.773 ms 00:25:29.301 [2024-07-26 05:22:48.165496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.301 [2024-07-26 05:22:48.165777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.301 [2024-07-26 05:22:48.165789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:29.301 [2024-07-26 05:22:48.165801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:25:29.301 [2024-07-26 05:22:48.165824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.301 [2024-07-26 05:22:48.223664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.301 [2024-07-26 05:22:48.223716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:29.301 [2024-07-26 05:22:48.223732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.301 [2024-07-26 05:22:48.223744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.301 [2024-07-26 05:22:48.223813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.301 [2024-07-26 05:22:48.223825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:29.301 [2024-07-26 05:22:48.223836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.301 [2024-07-26 05:22:48.223847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.301 [2024-07-26 05:22:48.223938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.301 [2024-07-26 05:22:48.223958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:29.301 [2024-07-26 05:22:48.223969] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.301 [2024-07-26 05:22:48.223980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.301 [2024-07-26 05:22:48.224000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.301 [2024-07-26 05:22:48.224011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:29.301 [2024-07-26 05:22:48.224022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.301 [2024-07-26 05:22:48.224032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.301 [2024-07-26 05:22:48.348016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.301 [2024-07-26 05:22:48.348080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:29.301 [2024-07-26 05:22:48.348096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.301 [2024-07-26 05:22:48.348108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.562 [2024-07-26 05:22:48.425330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.562 [2024-07-26 05:22:48.425411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:29.562 [2024-07-26 05:22:48.425434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.562 [2024-07-26 05:22:48.425452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.562 [2024-07-26 05:22:48.425579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.562 [2024-07-26 05:22:48.425599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:29.562 [2024-07-26 05:22:48.425624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.562 [2024-07-26 05:22:48.425640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.562 [2024-07-26 05:22:48.425707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.562 [2024-07-26 05:22:48.425725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:29.562 [2024-07-26 05:22:48.425741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.562 [2024-07-26 05:22:48.425757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.562 [2024-07-26 05:22:48.425936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.562 [2024-07-26 05:22:48.425956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:29.562 [2024-07-26 05:22:48.425973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.562 [2024-07-26 05:22:48.425994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.562 [2024-07-26 05:22:48.426051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.562 [2024-07-26 05:22:48.426070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:29.562 [2024-07-26 05:22:48.426086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.562 [2024-07-26 05:22:48.426102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.562 [2024-07-26 05:22:48.426152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.562 [2024-07-26 05:22:48.426169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:25:29.562 [2024-07-26 05:22:48.426185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.562 [2024-07-26 05:22:48.426233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.562 [2024-07-26 05:22:48.426297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.562 [2024-07-26 05:22:48.426328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:29.562 [2024-07-26 05:22:48.426350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.562 [2024-07-26 05:22:48.426366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.562 [2024-07-26 05:22:48.426533] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 664.568 ms, result 0 00:25:30.941 00:25:30.941 00:25:30.941 05:22:49 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:32.844 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:32.844 05:22:51 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:32.844 [2024-07-26 05:22:51.903653] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:32.844 [2024-07-26 05:22:51.903768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77548 ] 00:25:33.103 [2024-07-26 05:22:52.073333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.362 [2024-07-26 05:22:52.414561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.931 [2024-07-26 05:22:52.829463] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:33.931 [2024-07-26 05:22:52.829535] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:33.931 [2024-07-26 05:22:52.984310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.931 [2024-07-26 05:22:52.984356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:33.931 [2024-07-26 05:22:52.984371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:33.931 [2024-07-26 05:22:52.984383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.931 [2024-07-26 05:22:52.984432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.931 [2024-07-26 05:22:52.984445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:33.931 [2024-07-26 05:22:52.984456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:33.931 [2024-07-26 05:22:52.984466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.931 [2024-07-26 05:22:52.984486] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:33.931 [2024-07-26 05:22:52.985656] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:33.931 [2024-07-26 05:22:52.985680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.931 [2024-07-26 
05:22:52.985691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:33.931 [2024-07-26 05:22:52.985702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.198 ms 00:25:33.931 [2024-07-26 05:22:52.985713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.931 [2024-07-26 05:22:52.987074] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:33.931 [2024-07-26 05:22:53.006823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.931 [2024-07-26 05:22:53.006858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:33.931 [2024-07-26 05:22:53.006892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.750 ms 00:25:33.931 [2024-07-26 05:22:53.006903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.931 [2024-07-26 05:22:53.006964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.931 [2024-07-26 05:22:53.006976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:33.931 [2024-07-26 05:22:53.006987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:33.931 [2024-07-26 05:22:53.006997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.931 [2024-07-26 05:22:53.013713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.931 [2024-07-26 05:22:53.013742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:33.931 [2024-07-26 05:22:53.013753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.648 ms 00:25:33.931 [2024-07-26 05:22:53.013764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.931 [2024-07-26 05:22:53.013854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.931 [2024-07-26 05:22:53.013868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:33.931 [2024-07-26 05:22:53.013879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:33.931 [2024-07-26 05:22:53.013889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.931 [2024-07-26 05:22:53.013933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.931 [2024-07-26 05:22:53.013948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:33.931 [2024-07-26 05:22:53.013959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:33.931 [2024-07-26 05:22:53.013969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.931 [2024-07-26 05:22:53.013998] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:33.931 [2024-07-26 05:22:53.019807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.931 [2024-07-26 05:22:53.019837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:33.931 [2024-07-26 05:22:53.019849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.818 ms 00:25:33.931 [2024-07-26 05:22:53.019859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.931 [2024-07-26 05:22:53.019892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.931 [2024-07-26 05:22:53.019902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:33.931 [2024-07-26 
05:22:53.019913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:33.931 [2024-07-26 05:22:53.019922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.931 [2024-07-26 05:22:53.019970] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:33.931 [2024-07-26 05:22:53.019998] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:33.931 [2024-07-26 05:22:53.020030] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:33.931 [2024-07-26 05:22:53.020047] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:33.932 [2024-07-26 05:22:53.020112] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:33.932 [2024-07-26 05:22:53.020125] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:33.932 [2024-07-26 05:22:53.020138] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:33.932 [2024-07-26 05:22:53.020151] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:33.932 [2024-07-26 05:22:53.020163] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:33.932 [2024-07-26 05:22:53.020177] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:33.932 [2024-07-26 05:22:53.020187] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:33.932 [2024-07-26 05:22:53.020197] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:33.932 [2024-07-26 05:22:53.020221] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:33.932 [2024-07-26 05:22:53.020232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.932 [2024-07-26 05:22:53.020259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:33.932 [2024-07-26 05:22:53.020270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:25:33.932 [2024-07-26 05:22:53.020279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.932 [2024-07-26 05:22:53.020335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.932 [2024-07-26 05:22:53.020346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:33.932 [2024-07-26 05:22:53.020359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:33.932 [2024-07-26 05:22:53.020369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.932 [2024-07-26 05:22:53.020434] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:33.932 [2024-07-26 05:22:53.020446] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:33.932 [2024-07-26 05:22:53.020458] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:33.932 [2024-07-26 05:22:53.020468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.932 [2024-07-26 05:22:53.020480] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:33.932 [2024-07-26 05:22:53.020489] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.12 MiB 00:25:33.932 [2024-07-26 05:22:53.020498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:33.932 [2024-07-26 05:22:53.020508] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:33.932 [2024-07-26 05:22:53.020517] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:33.932 [2024-07-26 05:22:53.020528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:33.932 [2024-07-26 05:22:53.020537] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:33.932 [2024-07-26 05:22:53.020548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:33.932 [2024-07-26 05:22:53.020557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:33.932 [2024-07-26 05:22:53.020567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:33.932 [2024-07-26 05:22:53.020576] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:33.932 [2024-07-26 05:22:53.020585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.932 [2024-07-26 05:22:53.020595] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:33.932 [2024-07-26 05:22:53.020604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:33.932 [2024-07-26 05:22:53.020613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.932 [2024-07-26 05:22:53.020622] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:33.932 [2024-07-26 05:22:53.020632] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:33.932 [2024-07-26 05:22:53.020651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:33.932 [2024-07-26 05:22:53.020661] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:33.932 [2024-07-26 05:22:53.020670] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:33.932 [2024-07-26 05:22:53.020679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:33.932 [2024-07-26 05:22:53.020688] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:33.932 [2024-07-26 05:22:53.020698] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:33.932 [2024-07-26 05:22:53.020707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:33.932 [2024-07-26 05:22:53.020717] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:33.932 [2024-07-26 05:22:53.020726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:33.932 [2024-07-26 05:22:53.020735] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:33.932 [2024-07-26 05:22:53.020744] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:33.932 [2024-07-26 05:22:53.020753] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:33.932 [2024-07-26 05:22:53.020762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:33.932 [2024-07-26 05:22:53.020771] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:33.932 [2024-07-26 05:22:53.020781] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:33.932 [2024-07-26 05:22:53.020790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:33.932 [2024-07-26 05:22:53.020799] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:33.932 [2024-07-26 05:22:53.020808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:33.932 [2024-07-26 05:22:53.020817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:33.932 [2024-07-26 05:22:53.020826] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:33.932 [2024-07-26 05:22:53.020836] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:33.932 [2024-07-26 05:22:53.020846] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:33.932 [2024-07-26 05:22:53.020859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:33.932 [2024-07-26 05:22:53.020869] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:33.932 [2024-07-26 05:22:53.020879] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:33.932 [2024-07-26 05:22:53.020888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:33.932 [2024-07-26 05:22:53.020897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:33.932 [2024-07-26 05:22:53.020906] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:33.932 [2024-07-26 05:22:53.020916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:33.932 [2024-07-26 05:22:53.020927] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:33.932 [2024-07-26 05:22:53.020939] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:33.932 [2024-07-26 05:22:53.020950] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:33.932 [2024-07-26 05:22:53.020961] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:33.932 [2024-07-26 05:22:53.020971] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:33.932 [2024-07-26 05:22:53.020983] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:33.932 [2024-07-26 05:22:53.020993] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:33.932 [2024-07-26 05:22:53.021004] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:33.932 [2024-07-26 05:22:53.021016] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:33.932 [2024-07-26 05:22:53.021026] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:33.932 [2024-07-26 05:22:53.021036] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:33.932 [2024-07-26 05:22:53.021047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:33.932 [2024-07-26 05:22:53.021058] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:33.932 [2024-07-26 05:22:53.021069] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:33.932 [2024-07-26 05:22:53.021080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:33.932 [2024-07-26 05:22:53.021090] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:33.932 [2024-07-26 05:22:53.021102] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:33.932 [2024-07-26 05:22:53.021114] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:33.932 [2024-07-26 05:22:53.021124] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:33.932 [2024-07-26 05:22:53.021135] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:33.932 [2024-07-26 05:22:53.021146] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:33.932 [2024-07-26 05:22:53.021156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.932 [2024-07-26 05:22:53.021166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:33.932 [2024-07-26 05:22:53.021176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:25:33.932 [2024-07-26 05:22:53.021186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.192 [2024-07-26 05:22:53.046226] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.192 [2024-07-26 05:22:53.046261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:34.192 [2024-07-26 05:22:53.046274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.986 ms 00:25:34.192 [2024-07-26 05:22:53.046285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.192 [2024-07-26 05:22:53.046364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.192 [2024-07-26 05:22:53.046380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:34.192 [2024-07-26 05:22:53.046390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:34.192 [2024-07-26 05:22:53.046400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.192 [2024-07-26 05:22:53.108359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.192 [2024-07-26 05:22:53.108393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:34.192 [2024-07-26 05:22:53.108407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.907 ms 00:25:34.192 [2024-07-26 05:22:53.108420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.192 [2024-07-26 05:22:53.108454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.192 [2024-07-26 05:22:53.108464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:25:34.192 [2024-07-26 05:22:53.108474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:34.192 [2024-07-26 05:22:53.108484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.192 [2024-07-26 05:22:53.108927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.192 [2024-07-26 05:22:53.108939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:34.192 [2024-07-26 05:22:53.108949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:25:34.192 [2024-07-26 05:22:53.108958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.192 [2024-07-26 05:22:53.109057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.192 [2024-07-26 05:22:53.109070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:34.192 [2024-07-26 05:22:53.109079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:25:34.192 [2024-07-26 05:22:53.109089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.192 [2024-07-26 05:22:53.131675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.192 [2024-07-26 05:22:53.131708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:34.192 [2024-07-26 05:22:53.131720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.566 ms 00:25:34.192 [2024-07-26 05:22:53.131747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.192 [2024-07-26 05:22:53.150948] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:34.192 [2024-07-26 05:22:53.150984] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:34.192 [2024-07-26 05:22:53.150998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.192 [2024-07-26 05:22:53.151008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:34.192 [2024-07-26 05:22:53.151019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.149 ms 00:25:34.192 [2024-07-26 05:22:53.151028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.192 [2024-07-26 05:22:53.181408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.192 [2024-07-26 05:22:53.181461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:34.192 [2024-07-26 05:22:53.181476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.339 ms 00:25:34.192 [2024-07-26 05:22:53.181502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.193 [2024-07-26 05:22:53.200425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.193 [2024-07-26 05:22:53.200458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:34.193 [2024-07-26 05:22:53.200470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.879 ms 00:25:34.193 [2024-07-26 05:22:53.200479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.193 [2024-07-26 05:22:53.218368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.193 [2024-07-26 05:22:53.218401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:34.193 [2024-07-26 05:22:53.218414] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.853 ms 00:25:34.193 [2024-07-26 05:22:53.218423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.193 [2024-07-26 05:22:53.218900] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.193 [2024-07-26 05:22:53.218925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:34.193 [2024-07-26 05:22:53.218936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:25:34.193 [2024-07-26 05:22:53.218946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.452 [2024-07-26 05:22:53.310024] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.452 [2024-07-26 05:22:53.310083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:34.452 [2024-07-26 05:22:53.310099] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.059 ms 00:25:34.452 [2024-07-26 05:22:53.310110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.452 [2024-07-26 05:22:53.322599] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:34.452 [2024-07-26 05:22:53.325548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.452 [2024-07-26 05:22:53.325578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:34.452 [2024-07-26 05:22:53.325593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.381 ms 00:25:34.452 [2024-07-26 05:22:53.325604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.452 [2024-07-26 05:22:53.325689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.452 [2024-07-26 05:22:53.325706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:34.452 [2024-07-26 05:22:53.325717] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:34.452 [2024-07-26 05:22:53.325728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.452 [2024-07-26 05:22:53.326602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.452 [2024-07-26 05:22:53.326625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:34.452 [2024-07-26 05:22:53.326637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:25:34.452 [2024-07-26 05:22:53.326648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.452 [2024-07-26 05:22:53.328746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.452 [2024-07-26 05:22:53.328775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:34.452 [2024-07-26 05:22:53.328791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.079 ms 00:25:34.452 [2024-07-26 05:22:53.328801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.452 [2024-07-26 05:22:53.328832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.452 [2024-07-26 05:22:53.328843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:34.452 [2024-07-26 05:22:53.328853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:34.452 [2024-07-26 05:22:53.328868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.452 [2024-07-26 05:22:53.328905] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: 
*NOTICE*: [FTL][ftl0] Self test skipped 00:25:34.452 [2024-07-26 05:22:53.328917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.452 [2024-07-26 05:22:53.328927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:34.452 [2024-07-26 05:22:53.328937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:34.452 [2024-07-26 05:22:53.328950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.452 [2024-07-26 05:22:53.367190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.452 [2024-07-26 05:22:53.367234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:34.452 [2024-07-26 05:22:53.367249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.219 ms 00:25:34.452 [2024-07-26 05:22:53.367259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.452 [2024-07-26 05:22:53.367329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.452 [2024-07-26 05:22:53.367348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:34.452 [2024-07-26 05:22:53.367359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:34.452 [2024-07-26 05:22:53.367369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.452 [2024-07-26 05:22:53.368579] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.662 ms, result 0 00:26:07.592  Copying: 1024/1024 [MB] (average 31 MBps)[2024-07-26 05:23:26.492635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.592 [2024-07-26 05:23:26.492718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:07.592 [2024-07-26 05:23:26.492743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:07.592 [2024-07-26 05:23:26.492760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.592 [2024-07-26 05:23:26.492796] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:07.592 [2024-07-26 05:23:26.499060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.592 [2024-07-26 05:23:26.499113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:07.592 [2024-07-26 05:23:26.499133]
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.237 ms 00:26:07.592 [2024-07-26 05:23:26.499159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.592 [2024-07-26 05:23:26.499511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.592 [2024-07-26 05:23:26.499541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:07.592 [2024-07-26 05:23:26.499559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:26:07.592 [2024-07-26 05:23:26.499582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.592 [2024-07-26 05:23:26.503248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.592 [2024-07-26 05:23:26.503271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:07.592 [2024-07-26 05:23:26.503283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.641 ms 00:26:07.592 [2024-07-26 05:23:26.503292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.592 [2024-07-26 05:23:26.508411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.592 [2024-07-26 05:23:26.508443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:07.592 [2024-07-26 05:23:26.508455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.095 ms 00:26:07.592 [2024-07-26 05:23:26.508465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.592 [2024-07-26 05:23:26.547384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.592 [2024-07-26 05:23:26.547426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:07.592 [2024-07-26 05:23:26.547441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.844 ms 00:26:07.592 [2024-07-26 05:23:26.547452] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.592 [2024-07-26 05:23:26.570164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.592 [2024-07-26 05:23:26.570214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:07.592 [2024-07-26 05:23:26.570229] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.670 ms 00:26:07.592 [2024-07-26 05:23:26.570240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.592 [2024-07-26 05:23:26.574093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.592 [2024-07-26 05:23:26.574138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:07.592 [2024-07-26 05:23:26.574152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.809 ms 00:26:07.592 [2024-07-26 05:23:26.574163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.592 [2024-07-26 05:23:26.612935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.592 [2024-07-26 05:23:26.612973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:07.592 [2024-07-26 05:23:26.612987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.753 ms 00:26:07.592 [2024-07-26 05:23:26.612996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.592 [2024-07-26 05:23:26.650720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.592 [2024-07-26 05:23:26.650758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
persist trim metadata 00:26:07.592 [2024-07-26 05:23:26.650771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.685 ms 00:26:07.592 [2024-07-26 05:23:26.650781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.592 [2024-07-26 05:23:26.688229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.592 [2024-07-26 05:23:26.688266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:07.592 [2024-07-26 05:23:26.688279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.410 ms 00:26:07.592 [2024-07-26 05:23:26.688304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.853 [2024-07-26 05:23:26.726367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.853 [2024-07-26 05:23:26.726405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:07.853 [2024-07-26 05:23:26.726420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.975 ms 00:26:07.853 [2024-07-26 05:23:26.726429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.853 [2024-07-26 05:23:26.726467] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:07.853 [2024-07-26 05:23:26.726483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:07.853 [2024-07-26 05:23:26.726497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:26:07.853 [2024-07-26 05:23:26.726508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 
00:26:07.853 [2024-07-26 05:23:26.726660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 
wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.726992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.727003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.727013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.727024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.727035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.727045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.727056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.727067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.727079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:07.853 [2024-07-26 05:23:26.727089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727476] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:07.854 [2024-07-26 05:23:26.727589] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:07.854 [2024-07-26 05:23:26.727599] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cec5690a-de13-4374-abd2-4c80df841bd5 00:26:07.854 [2024-07-26 05:23:26.727615] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:26:07.854 [2024-07-26 05:23:26.727625] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:07.854 [2024-07-26 05:23:26.727635] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:07.854 [2024-07-26 05:23:26.727645] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:07.854 [2024-07-26 05:23:26.727654] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:07.854 [2024-07-26 05:23:26.727665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:07.854 [2024-07-26 05:23:26.727674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:07.854 [2024-07-26 05:23:26.727684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:07.854 [2024-07-26 05:23:26.727693] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:07.854 [2024-07-26 05:23:26.727703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.854 [2024-07-26 05:23:26.727712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:07.854 [2024-07-26 05:23:26.727723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:26:07.854 [2024-07-26 05:23:26.727744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.854 [2024-07-26 05:23:26.747749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.854 [2024-07-26 05:23:26.747784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:07.854 [2024-07-26 05:23:26.747798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.955 ms 00:26:07.854 [2024-07-26 05:23:26.747808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.854 [2024-07-26 
05:23:26.748068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.854 [2024-07-26 05:23:26.748080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:07.854 [2024-07-26 05:23:26.748096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:26:07.854 [2024-07-26 05:23:26.748106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.854 [2024-07-26 05:23:26.804926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.854 [2024-07-26 05:23:26.804969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:07.854 [2024-07-26 05:23:26.804990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.854 [2024-07-26 05:23:26.805001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.854 [2024-07-26 05:23:26.805063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.854 [2024-07-26 05:23:26.805074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:07.854 [2024-07-26 05:23:26.805091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.854 [2024-07-26 05:23:26.805100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.854 [2024-07-26 05:23:26.805183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.854 [2024-07-26 05:23:26.805195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:07.854 [2024-07-26 05:23:26.805222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.854 [2024-07-26 05:23:26.805233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.854 [2024-07-26 05:23:26.805252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.854 [2024-07-26 05:23:26.805263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:07.854 [2024-07-26 05:23:26.805273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.854 [2024-07-26 05:23:26.805287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.854 [2024-07-26 05:23:26.928872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.854 [2024-07-26 05:23:26.928925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:07.854 [2024-07-26 05:23:26.928940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.854 [2024-07-26 05:23:26.928951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.112 [2024-07-26 05:23:26.977397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:08.112 [2024-07-26 05:23:26.977470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:08.112 [2024-07-26 05:23:26.977497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:08.112 [2024-07-26 05:23:26.977514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.112 [2024-07-26 05:23:26.977594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:08.112 [2024-07-26 05:23:26.977605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:08.113 [2024-07-26 05:23:26.977616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:08.113 [2024-07-26 05:23:26.977626] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.113 [2024-07-26 05:23:26.977669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:08.113 [2024-07-26 05:23:26.977680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:08.113 [2024-07-26 05:23:26.977690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:08.113 [2024-07-26 05:23:26.977700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.113 [2024-07-26 05:23:26.977809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:08.113 [2024-07-26 05:23:26.977823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:08.113 [2024-07-26 05:23:26.977833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:08.113 [2024-07-26 05:23:26.977843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.113 [2024-07-26 05:23:26.977878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:08.113 [2024-07-26 05:23:26.977890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:08.113 [2024-07-26 05:23:26.977900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:08.113 [2024-07-26 05:23:26.977910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.113 [2024-07-26 05:23:26.977951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:08.113 [2024-07-26 05:23:26.977962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:08.113 [2024-07-26 05:23:26.977972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:08.113 [2024-07-26 05:23:26.977982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.113 [2024-07-26 05:23:26.978027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:08.113 [2024-07-26 05:23:26.978038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:08.113 [2024-07-26 05:23:26.978048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:08.113 [2024-07-26 05:23:26.978059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.113 [2024-07-26 05:23:26.978176] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 485.520 ms, result 0 00:26:09.489 00:26:09.489 00:26:09.489 05:23:28 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:11.392 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:11.392 05:23:30 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:11.392 05:23:30 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:11.392 05:23:30 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:11.392 05:23:30 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:11.392 05:23:30 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:11.392 05:23:30 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:11.392 05:23:30 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:11.392 05:23:30 -- ftl/dirty_shutdown.sh@37 -- # killprocess 75918 00:26:11.392 05:23:30 -- common/autotest_common.sh@926 -- # '[' -z 
75918 ']' 00:26:11.392 05:23:30 -- common/autotest_common.sh@930 -- # kill -0 75918 00:26:11.392 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (75918) - No such process 00:26:11.392 Process with pid 75918 is not found 00:26:11.392 05:23:30 -- common/autotest_common.sh@953 -- # echo 'Process with pid 75918 is not found' 00:26:11.392 05:23:30 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:11.650 Remove shared memory files 00:26:11.650 05:23:30 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:11.650 05:23:30 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:11.650 05:23:30 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:11.650 05:23:30 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:11.650 05:23:30 -- ftl/common.sh@207 -- # rm -f rm -f 00:26:11.650 05:23:30 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:11.650 05:23:30 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:11.650 ************************************ 00:26:11.650 END TEST ftl_dirty_shutdown 00:26:11.650 ************************************ 00:26:11.650 00:26:11.650 real 3m14.519s 00:26:11.650 user 3m43.391s 00:26:11.650 sys 0m35.387s 00:26:11.651 05:23:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:11.651 05:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:11.651 05:23:30 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:11.651 05:23:30 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:26:11.651 05:23:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:11.651 05:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:11.651 ************************************ 00:26:11.651 START TEST ftl_upgrade_shutdown 00:26:11.651 ************************************ 00:26:11.651 05:23:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:11.910 * Looking for test storage... 00:26:11.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:11.910 05:23:30 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:11.910 05:23:30 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:11.910 05:23:30 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:11.910 05:23:30 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:26:11.910 05:23:30 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:11.910 05:23:30 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:11.910 05:23:30 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:11.910 05:23:30 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:11.910 05:23:30 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:11.910 05:23:30 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:11.910 05:23:30 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:11.910 05:23:30 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:11.910 05:23:30 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:11.910 05:23:30 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:11.910 05:23:30 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:11.910 05:23:30 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:11.910 05:23:30 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:11.910 05:23:30 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:11.910 05:23:30 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:11.910 05:23:30 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:11.910 05:23:30 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:11.910 05:23:30 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:11.910 05:23:30 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:11.910 05:23:30 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:11.910 05:23:30 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:11.910 05:23:30 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:11.910 05:23:30 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:11.910 05:23:30 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:11.910 05:23:30 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:11.910 05:23:30 -- ftl/common.sh@81 -- # local base_bdev= 00:26:11.910 05:23:30 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:11.910 05:23:30 -- ftl/common.sh@84 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:11.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.910 05:23:30 -- ftl/common.sh@89 -- # spdk_tgt_pid=78001 00:26:11.910 05:23:30 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:11.910 05:23:30 -- ftl/common.sh@91 -- # waitforlisten 78001 00:26:11.910 05:23:30 -- common/autotest_common.sh@819 -- # '[' -z 78001 ']' 00:26:11.910 05:23:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.910 05:23:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:11.910 05:23:30 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:11.910 05:23:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.910 05:23:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:11.910 05:23:30 -- common/autotest_common.sh@10 -- # set +x 00:26:11.910 [2024-07-26 05:23:30.971796] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:11.910 [2024-07-26 05:23:30.972254] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78001 ] 00:26:12.170 [2024-07-26 05:23:31.164812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.429 [2024-07-26 05:23:31.481165] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:12.429 [2024-07-26 05:23:31.481601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.806 05:23:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:13.806 05:23:32 -- common/autotest_common.sh@852 -- # return 0 00:26:13.806 05:23:32 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:13.806 05:23:32 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:13.806 05:23:32 -- ftl/common.sh@99 -- # local params 00:26:13.806 05:23:32 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:13.806 05:23:32 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:13.806 05:23:32 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:13.806 05:23:32 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:26:13.806 05:23:32 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:13.806 05:23:32 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:13.806 05:23:32 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:13.806 05:23:32 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:26:13.806 05:23:32 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:13.806 05:23:32 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:13.806 05:23:32 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:13.806 05:23:32 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:13.806 05:23:32 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:26:13.806 05:23:32 -- ftl/common.sh@54 -- # local name=base 00:26:13.806 05:23:32 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:26:13.806 05:23:32 -- ftl/common.sh@56 -- # local size=20480 00:26:13.806 05:23:32 -- ftl/common.sh@59 -- # local base_bdev 00:26:13.806 05:23:32 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t 
PCIe -a 0000:00:07.0 00:26:13.806 05:23:32 -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:13.806 05:23:32 -- ftl/common.sh@62 -- # local base_size 00:26:13.806 05:23:32 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:13.806 05:23:32 -- common/autotest_common.sh@1357 -- # local bdev_name=basen1 00:26:13.806 05:23:32 -- common/autotest_common.sh@1358 -- # local bdev_info 00:26:13.806 05:23:32 -- common/autotest_common.sh@1359 -- # local bs 00:26:13.806 05:23:32 -- common/autotest_common.sh@1360 -- # local nb 00:26:13.806 05:23:32 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:14.065 05:23:33 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:26:14.065 { 00:26:14.065 "name": "basen1", 00:26:14.065 "aliases": [ 00:26:14.065 "31046ad4-5ed3-43ab-a0ce-334f427163f0" 00:26:14.065 ], 00:26:14.065 "product_name": "NVMe disk", 00:26:14.065 "block_size": 4096, 00:26:14.065 "num_blocks": 1310720, 00:26:14.065 "uuid": "31046ad4-5ed3-43ab-a0ce-334f427163f0", 00:26:14.065 "assigned_rate_limits": { 00:26:14.065 "rw_ios_per_sec": 0, 00:26:14.065 "rw_mbytes_per_sec": 0, 00:26:14.065 "r_mbytes_per_sec": 0, 00:26:14.065 "w_mbytes_per_sec": 0 00:26:14.065 }, 00:26:14.065 "claimed": true, 00:26:14.065 "claim_type": "read_many_write_one", 00:26:14.065 "zoned": false, 00:26:14.065 "supported_io_types": { 00:26:14.065 "read": true, 00:26:14.065 "write": true, 00:26:14.065 "unmap": true, 00:26:14.065 "write_zeroes": true, 00:26:14.065 "flush": true, 00:26:14.065 "reset": true, 00:26:14.065 "compare": true, 00:26:14.065 "compare_and_write": false, 00:26:14.065 "abort": true, 00:26:14.065 "nvme_admin": true, 00:26:14.065 "nvme_io": true 00:26:14.065 }, 00:26:14.065 "driver_specific": { 00:26:14.065 "nvme": [ 00:26:14.065 { 00:26:14.065 "pci_address": "0000:00:07.0", 00:26:14.065 "trid": { 00:26:14.065 "trtype": "PCIe", 00:26:14.065 "traddr": "0000:00:07.0" 00:26:14.065 }, 00:26:14.065 "ctrlr_data": { 00:26:14.065 "cntlid": 0, 00:26:14.065 "vendor_id": "0x1b36", 00:26:14.065 "model_number": "QEMU NVMe Ctrl", 00:26:14.065 "serial_number": "12341", 00:26:14.065 "firmware_revision": "8.0.0", 00:26:14.065 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:14.065 "oacs": { 00:26:14.065 "security": 0, 00:26:14.065 "format": 1, 00:26:14.065 "firmware": 0, 00:26:14.065 "ns_manage": 1 00:26:14.065 }, 00:26:14.065 "multi_ctrlr": false, 00:26:14.065 "ana_reporting": false 00:26:14.065 }, 00:26:14.065 "vs": { 00:26:14.065 "nvme_version": "1.4" 00:26:14.065 }, 00:26:14.066 "ns_data": { 00:26:14.066 "id": 1, 00:26:14.066 "can_share": false 00:26:14.066 } 00:26:14.066 } 00:26:14.066 ], 00:26:14.066 "mp_policy": "active_passive" 00:26:14.066 } 00:26:14.066 } 00:26:14.066 ]' 00:26:14.066 05:23:33 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:26:14.066 05:23:33 -- common/autotest_common.sh@1362 -- # bs=4096 00:26:14.066 05:23:33 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:26:14.325 05:23:33 -- common/autotest_common.sh@1363 -- # nb=1310720 00:26:14.325 05:23:33 -- common/autotest_common.sh@1366 -- # bdev_size=5120 00:26:14.325 05:23:33 -- common/autotest_common.sh@1367 -- # echo 5120 00:26:14.325 05:23:33 -- ftl/common.sh@63 -- # base_size=5120 00:26:14.325 05:23:33 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:14.325 05:23:33 -- ftl/common.sh@67 -- # clear_lvols 00:26:14.325 05:23:33 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:14.325 05:23:33 -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:26:14.325 05:23:33 -- ftl/common.sh@28 -- # stores=a34098b8-1337-4be7-844a-dc2f2b7e212b 00:26:14.325 05:23:33 -- ftl/common.sh@29 -- # for lvs in $stores 00:26:14.325 05:23:33 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a34098b8-1337-4be7-844a-dc2f2b7e212b 00:26:14.583 05:23:33 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:14.840 05:23:33 -- ftl/common.sh@68 -- # lvs=a204e1e9-1c8a-4ec2-8656-fac2fa71ba32 00:26:14.840 05:23:33 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u a204e1e9-1c8a-4ec2-8656-fac2fa71ba32 00:26:15.099 05:23:34 -- ftl/common.sh@107 -- # base_bdev=f6044268-b5c2-415c-bdb6-26aed7012d5f 00:26:15.099 05:23:34 -- ftl/common.sh@108 -- # [[ -z f6044268-b5c2-415c-bdb6-26aed7012d5f ]] 00:26:15.099 05:23:34 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 f6044268-b5c2-415c-bdb6-26aed7012d5f 5120 00:26:15.099 05:23:34 -- ftl/common.sh@35 -- # local name=cache 00:26:15.099 05:23:34 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:26:15.099 05:23:34 -- ftl/common.sh@37 -- # local base_bdev=f6044268-b5c2-415c-bdb6-26aed7012d5f 00:26:15.099 05:23:34 -- ftl/common.sh@38 -- # local cache_size=5120 00:26:15.099 05:23:34 -- ftl/common.sh@41 -- # get_bdev_size f6044268-b5c2-415c-bdb6-26aed7012d5f 00:26:15.099 05:23:34 -- common/autotest_common.sh@1357 -- # local bdev_name=f6044268-b5c2-415c-bdb6-26aed7012d5f 00:26:15.099 05:23:34 -- common/autotest_common.sh@1358 -- # local bdev_info 00:26:15.099 05:23:34 -- common/autotest_common.sh@1359 -- # local bs 00:26:15.099 05:23:34 -- common/autotest_common.sh@1360 -- # local nb 00:26:15.099 05:23:34 -- common/autotest_common.sh@1361 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f6044268-b5c2-415c-bdb6-26aed7012d5f 00:26:15.357 05:23:34 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:26:15.357 { 00:26:15.357 "name": "f6044268-b5c2-415c-bdb6-26aed7012d5f", 00:26:15.357 "aliases": [ 00:26:15.357 "lvs/basen1p0" 00:26:15.357 ], 00:26:15.357 "product_name": "Logical Volume", 00:26:15.357 "block_size": 4096, 00:26:15.357 "num_blocks": 5242880, 00:26:15.357 "uuid": "f6044268-b5c2-415c-bdb6-26aed7012d5f", 00:26:15.357 "assigned_rate_limits": { 00:26:15.357 "rw_ios_per_sec": 0, 00:26:15.357 "rw_mbytes_per_sec": 0, 00:26:15.357 "r_mbytes_per_sec": 0, 00:26:15.357 "w_mbytes_per_sec": 0 00:26:15.357 }, 00:26:15.357 "claimed": false, 00:26:15.357 "zoned": false, 00:26:15.357 "supported_io_types": { 00:26:15.357 "read": true, 00:26:15.357 "write": true, 00:26:15.357 "unmap": true, 00:26:15.357 "write_zeroes": true, 00:26:15.357 "flush": false, 00:26:15.357 "reset": true, 00:26:15.357 "compare": false, 00:26:15.357 "compare_and_write": false, 00:26:15.357 "abort": false, 00:26:15.357 "nvme_admin": false, 00:26:15.357 "nvme_io": false 00:26:15.357 }, 00:26:15.357 "driver_specific": { 00:26:15.357 "lvol": { 00:26:15.357 "lvol_store_uuid": "a204e1e9-1c8a-4ec2-8656-fac2fa71ba32", 00:26:15.357 "base_bdev": "basen1", 00:26:15.357 "thin_provision": true, 00:26:15.357 "snapshot": false, 00:26:15.357 "clone": false, 00:26:15.357 "esnap_clone": false 00:26:15.357 } 00:26:15.357 } 00:26:15.357 } 00:26:15.357 ]' 00:26:15.357 05:23:34 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:26:15.357 05:23:34 -- common/autotest_common.sh@1362 -- # bs=4096 00:26:15.357 05:23:34 -- common/autotest_common.sh@1363 -- # jq '.[] 
.num_blocks' 00:26:15.357 05:23:34 -- common/autotest_common.sh@1363 -- # nb=5242880 00:26:15.357 05:23:34 -- common/autotest_common.sh@1366 -- # bdev_size=20480 00:26:15.357 05:23:34 -- common/autotest_common.sh@1367 -- # echo 20480 00:26:15.357 05:23:34 -- ftl/common.sh@41 -- # local base_size=1024 00:26:15.357 05:23:34 -- ftl/common.sh@44 -- # local nvc_bdev 00:26:15.357 05:23:34 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:26:15.615 05:23:34 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:15.615 05:23:34 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:15.615 05:23:34 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:15.873 05:23:34 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:15.873 05:23:34 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:15.873 05:23:34 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f6044268-b5c2-415c-bdb6-26aed7012d5f -c cachen1p0 --l2p_dram_limit 2 00:26:16.133 [2024-07-26 05:23:35.067629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.133 [2024-07-26 05:23:35.067681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:16.133 [2024-07-26 05:23:35.067700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:16.133 [2024-07-26 05:23:35.067710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.133 [2024-07-26 05:23:35.067792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.133 [2024-07-26 05:23:35.067804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:16.133 [2024-07-26 05:23:35.067817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:26:16.133 [2024-07-26 05:23:35.067827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.133 [2024-07-26 05:23:35.067852] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:16.133 [2024-07-26 05:23:35.069047] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:16.133 [2024-07-26 05:23:35.069083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.133 [2024-07-26 05:23:35.069094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:16.133 [2024-07-26 05:23:35.069109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.233 ms 00:26:16.133 [2024-07-26 05:23:35.069119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.133 [2024-07-26 05:23:35.069196] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 414ebf38-7d95-4a7a-9a1a-529f611a254d 00:26:16.133 [2024-07-26 05:23:35.070651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.133 [2024-07-26 05:23:35.070688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:16.133 [2024-07-26 05:23:35.070700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:16.133 [2024-07-26 05:23:35.070713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.133 [2024-07-26 05:23:35.078247] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.133 [2024-07-26 05:23:35.078283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] 
name: Initialize memory pools 00:26:16.133 [2024-07-26 05:23:35.078295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.487 ms 00:26:16.133 [2024-07-26 05:23:35.078323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.133 [2024-07-26 05:23:35.078367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.133 [2024-07-26 05:23:35.078383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:16.133 [2024-07-26 05:23:35.078404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:16.133 [2024-07-26 05:23:35.078419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.133 [2024-07-26 05:23:35.078488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.133 [2024-07-26 05:23:35.078502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:16.133 [2024-07-26 05:23:35.078513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:16.133 [2024-07-26 05:23:35.078528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.133 [2024-07-26 05:23:35.078568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:16.133 [2024-07-26 05:23:35.084520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.133 [2024-07-26 05:23:35.084554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:16.133 [2024-07-26 05:23:35.084568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.960 ms 00:26:16.133 [2024-07-26 05:23:35.084593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.133 [2024-07-26 05:23:35.084629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.133 [2024-07-26 05:23:35.084639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:16.133 [2024-07-26 05:23:35.084652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:16.133 [2024-07-26 05:23:35.084661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.133 [2024-07-26 05:23:35.084699] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:16.133 [2024-07-26 05:23:35.084805] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:16.133 [2024-07-26 05:23:35.084823] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:16.133 [2024-07-26 05:23:35.084836] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:16.133 [2024-07-26 05:23:35.084851] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:16.133 [2024-07-26 05:23:35.084863] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:16.133 [2024-07-26 05:23:35.084876] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:16.133 [2024-07-26 05:23:35.084886] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:16.133 [2024-07-26 05:23:35.084897] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:16.133 [2024-07-26 05:23:35.084911] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:16.133 [2024-07-26 
05:23:35.084923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.133 [2024-07-26 05:23:35.084933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:16.133 [2024-07-26 05:23:35.084946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.227 ms 00:26:16.133 [2024-07-26 05:23:35.084955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.133 [2024-07-26 05:23:35.085013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.133 [2024-07-26 05:23:35.085024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:16.133 [2024-07-26 05:23:35.085047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:16.133 [2024-07-26 05:23:35.085057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.133 [2024-07-26 05:23:35.085128] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:16.133 [2024-07-26 05:23:35.085139] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:16.133 [2024-07-26 05:23:35.085152] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:16.133 [2024-07-26 05:23:35.085162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:16.133 [2024-07-26 05:23:35.085174] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:16.133 [2024-07-26 05:23:35.085182] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:16.133 [2024-07-26 05:23:35.085194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:16.133 [2024-07-26 05:23:35.085203] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:16.133 [2024-07-26 05:23:35.085214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:16.133 [2024-07-26 05:23:35.085246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:16.133 [2024-07-26 05:23:35.085259] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:16.133 [2024-07-26 05:23:35.085269] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:16.133 [2024-07-26 05:23:35.085283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:16.133 [2024-07-26 05:23:35.085292] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:16.133 [2024-07-26 05:23:35.085308] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:16.133 [2024-07-26 05:23:35.085318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:16.133 [2024-07-26 05:23:35.085339] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:16.133 [2024-07-26 05:23:35.085349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:16.133 [2024-07-26 05:23:35.085360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:16.133 [2024-07-26 05:23:35.085369] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:16.133 [2024-07-26 05:23:35.085380] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:16.133 [2024-07-26 05:23:35.085390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:16.133 [2024-07-26 05:23:35.085401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:16.133 [2024-07-26 05:23:35.085420] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:16.133 [2024-07-26 
05:23:35.085448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:16.133 [2024-07-26 05:23:35.085457] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:16.133 [2024-07-26 05:23:35.085468] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:16.133 [2024-07-26 05:23:35.085477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:16.133 [2024-07-26 05:23:35.085489] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:16.133 [2024-07-26 05:23:35.085498] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:16.133 [2024-07-26 05:23:35.085509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:16.133 [2024-07-26 05:23:35.085518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:16.133 [2024-07-26 05:23:35.085532] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:16.133 [2024-07-26 05:23:35.085541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:16.133 [2024-07-26 05:23:35.085553] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:16.133 [2024-07-26 05:23:35.085562] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:16.133 [2024-07-26 05:23:35.085574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:16.134 [2024-07-26 05:23:35.085583] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:16.134 [2024-07-26 05:23:35.085595] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:16.134 [2024-07-26 05:23:35.085604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:16.134 [2024-07-26 05:23:35.085615] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:16.134 [2024-07-26 05:23:35.085625] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:16.134 [2024-07-26 05:23:35.085637] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:16.134 [2024-07-26 05:23:35.085647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:16.134 [2024-07-26 05:23:35.085659] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:16.134 [2024-07-26 05:23:35.085669] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:16.134 [2024-07-26 05:23:35.085680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:16.134 [2024-07-26 05:23:35.085689] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:16.134 [2024-07-26 05:23:35.085704] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:16.134 [2024-07-26 05:23:35.085714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:16.134 [2024-07-26 05:23:35.085727] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:16.134 [2024-07-26 05:23:35.085739] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:16.134 [2024-07-26 05:23:35.085756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:16.134 [2024-07-26 05:23:35.085767] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:16.134 [2024-07-26 
05:23:35.085779] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:16.134 [2024-07-26 05:23:35.085790] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:16.134 [2024-07-26 05:23:35.085803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:16.134 [2024-07-26 05:23:35.085813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:16.134 [2024-07-26 05:23:35.085826] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:16.134 [2024-07-26 05:23:35.085837] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:16.134 [2024-07-26 05:23:35.085849] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:16.134 [2024-07-26 05:23:35.085859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:16.134 [2024-07-26 05:23:35.085872] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:16.134 [2024-07-26 05:23:35.085882] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:16.134 [2024-07-26 05:23:35.085899] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:16.134 [2024-07-26 05:23:35.085910] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:16.134 [2024-07-26 05:23:35.085923] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:16.134 [2024-07-26 05:23:35.085934] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:16.134 [2024-07-26 05:23:35.085946] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:16.134 [2024-07-26 05:23:35.085957] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:16.134 [2024-07-26 05:23:35.085969] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:16.134 [2024-07-26 05:23:35.085980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.134 [2024-07-26 05:23:35.085992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:16.134 [2024-07-26 05:23:35.086003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.891 ms 00:26:16.134 [2024-07-26 05:23:35.086015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.134 [2024-07-26 05:23:35.110350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.134 [2024-07-26 05:23:35.110395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize metadata 00:26:16.134 [2024-07-26 05:23:35.110409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.291 ms 00:26:16.134 [2024-07-26 05:23:35.110421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.134 [2024-07-26 05:23:35.110462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.134 [2024-07-26 05:23:35.110477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:16.134 [2024-07-26 05:23:35.110487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:16.134 [2024-07-26 05:23:35.110498] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.134 [2024-07-26 05:23:35.162273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.134 [2024-07-26 05:23:35.162308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:16.134 [2024-07-26 05:23:35.162320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 51.720 ms 00:26:16.134 [2024-07-26 05:23:35.162348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.134 [2024-07-26 05:23:35.162380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.134 [2024-07-26 05:23:35.162397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:16.134 [2024-07-26 05:23:35.162407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:16.134 [2024-07-26 05:23:35.162419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.134 [2024-07-26 05:23:35.162879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.134 [2024-07-26 05:23:35.162897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:16.134 [2024-07-26 05:23:35.162907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.410 ms 00:26:16.134 [2024-07-26 05:23:35.162918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.134 [2024-07-26 05:23:35.162953] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.134 [2024-07-26 05:23:35.162978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:16.134 [2024-07-26 05:23:35.162988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:16.134 [2024-07-26 05:23:35.162999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.134 [2024-07-26 05:23:35.186565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.134 [2024-07-26 05:23:35.186600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:16.134 [2024-07-26 05:23:35.186612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.547 ms 00:26:16.134 [2024-07-26 05:23:35.186625] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.134 [2024-07-26 05:23:35.199919] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:16.134 [2024-07-26 05:23:35.200997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.134 [2024-07-26 05:23:35.201029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:16.134 [2024-07-26 05:23:35.201044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.282 ms 00:26:16.134 [2024-07-26 05:23:35.201054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.134 [2024-07-26 
05:23:35.234241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.134 [2024-07-26 05:23:35.234285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:16.134 [2024-07-26 05:23:35.234302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 33.152 ms 00:26:16.134 [2024-07-26 05:23:35.234312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.134 [2024-07-26 05:23:35.234361] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:26:16.134 [2024-07-26 05:23:35.234376] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:26:19.417 [2024-07-26 05:23:38.082905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:19.417 [2024-07-26 05:23:38.082965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:19.417 [2024-07-26 05:23:38.082999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2848.525 ms 00:26:19.417 [2024-07-26 05:23:38.083010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:19.417 [2024-07-26 05:23:38.083108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:19.417 [2024-07-26 05:23:38.083120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:19.417 [2024-07-26 05:23:38.083133] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:26:19.417 [2024-07-26 05:23:38.083144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:19.417 [2024-07-26 05:23:38.119820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:19.417 [2024-07-26 05:23:38.119857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:19.417 [2024-07-26 05:23:38.119890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 36.615 ms 00:26:19.417 [2024-07-26 05:23:38.119900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:19.417 [2024-07-26 05:23:38.157503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:19.417 [2024-07-26 05:23:38.157541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:19.417 [2024-07-26 05:23:38.157561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 37.555 ms 00:26:19.417 [2024-07-26 05:23:38.157571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:19.417 [2024-07-26 05:23:38.158030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:19.417 [2024-07-26 05:23:38.158044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:19.417 [2024-07-26 05:23:38.158057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.416 ms 00:26:19.417 [2024-07-26 05:23:38.158068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:19.417 [2024-07-26 05:23:38.251411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:19.417 [2024-07-26 05:23:38.251453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:19.417 [2024-07-26 05:23:38.251470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 93.288 ms 00:26:19.417 [2024-07-26 05:23:38.251481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:19.418 [2024-07-26 05:23:38.288496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:19.418 [2024-07-26 05:23:38.288536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:19.418 [2024-07-26 05:23:38.288553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 36.971 ms 00:26:19.418 [2024-07-26 05:23:38.288567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:19.418 [2024-07-26 05:23:38.290693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:19.418 [2024-07-26 05:23:38.290721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:19.418 [2024-07-26 05:23:38.290739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.083 ms 00:26:19.418 [2024-07-26 05:23:38.290749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:19.418 [2024-07-26 05:23:38.327589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:19.418 [2024-07-26 05:23:38.327623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:19.418 [2024-07-26 05:23:38.327638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 36.783 ms 00:26:19.418 [2024-07-26 05:23:38.327647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:19.418 [2024-07-26 05:23:38.327691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:19.418 [2024-07-26 05:23:38.327702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:19.418 [2024-07-26 05:23:38.327715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:19.418 [2024-07-26 05:23:38.327724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:19.418 [2024-07-26 05:23:38.327819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:19.418 [2024-07-26 05:23:38.327831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:19.418 [2024-07-26 05:23:38.327846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:26:19.418 [2024-07-26 05:23:38.327855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:19.418 [2024-07-26 05:23:38.329083] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3260.975 ms, result 0 00:26:19.418 { 00:26:19.418 "name": "ftl", 00:26:19.418 "uuid": "414ebf38-7d95-4a7a-9a1a-529f611a254d" 00:26:19.418 } 00:26:19.418 05:23:38 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:19.676 [2024-07-26 05:23:38.556112] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.676 05:23:38 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:19.934 05:23:38 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:19.934 [2024-07-26 05:23:38.960509] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:19.934 05:23:38 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:20.192 [2024-07-26 05:23:39.119042] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:20.192 05:23:39 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:20.450 05:23:39 
-- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:20.450 05:23:39 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:20.450 05:23:39 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:20.450 05:23:39 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:20.450 05:23:39 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:20.450 05:23:39 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:20.450 05:23:39 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:20.450 05:23:39 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:20.450 05:23:39 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:20.450 05:23:39 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:20.450 05:23:39 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:20.450 Fill FTL, iteration 1 00:26:20.450 05:23:39 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:20.450 05:23:39 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:20.450 05:23:39 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:20.450 05:23:39 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:20.450 05:23:39 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:20.450 05:23:39 -- ftl/common.sh@163 -- # spdk_ini_pid=78125 00:26:20.450 05:23:39 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:20.450 05:23:39 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:20.450 05:23:39 -- ftl/common.sh@165 -- # waitforlisten 78125 /var/tmp/spdk.tgt.sock 00:26:20.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:20.451 05:23:39 -- common/autotest_common.sh@819 -- # '[' -z 78125 ']' 00:26:20.451 05:23:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:20.451 05:23:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:20.451 05:23:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:20.451 05:23:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:20.451 05:23:39 -- common/autotest_common.sh@10 -- # set +x 00:26:20.451 [2024-07-26 05:23:39.516837] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
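The tcp_dd helper echoed above is the test's wrapper around spdk_dd. Reconstructed solely from the command lines recorded in this log, the iteration-1 fill reduces to the following; all flags, paths, and values are copied from the log, and only the DD/CFG shorthands are introduced here for readability:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    # Write 1024 x 1 MiB blocks of random data to the TCP-attached FTL bdev,
    # keeping two requests in flight (--qd=2), starting at block offset 0.
    "$DD" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$CFG" \
          --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0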
00:26:20.451 [2024-07-26 05:23:39.516946] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78125 ] 00:26:20.709 [2024-07-26 05:23:39.675987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.967 [2024-07-26 05:23:39.904803] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:20.967 [2024-07-26 05:23:39.905013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.901 05:23:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:21.901 05:23:40 -- common/autotest_common.sh@852 -- # return 0 00:26:21.902 05:23:40 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:22.160 ftln1 00:26:22.160 05:23:41 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:22.160 05:23:41 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:22.418 05:23:41 -- ftl/common.sh@173 -- # echo ']}' 00:26:22.418 05:23:41 -- ftl/common.sh@176 -- # killprocess 78125 00:26:22.418 05:23:41 -- common/autotest_common.sh@926 -- # '[' -z 78125 ']' 00:26:22.418 05:23:41 -- common/autotest_common.sh@930 -- # kill -0 78125 00:26:22.418 05:23:41 -- common/autotest_common.sh@931 -- # uname 00:26:22.418 05:23:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:22.418 05:23:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78125 00:26:22.418 killing process with pid 78125 00:26:22.418 05:23:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:22.418 05:23:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:22.418 05:23:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78125' 00:26:22.418 05:23:41 -- common/autotest_common.sh@945 -- # kill 78125 00:26:22.418 05:23:41 -- common/autotest_common.sh@950 -- # wait 78125 00:26:24.951 05:23:43 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:24.951 05:23:43 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:24.951 [2024-07-26 05:23:43.923571] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
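Before any data moves, the initiator side attaches the NVMe/TCP-exported FTL namespace and snapshots the resulting bdev configuration. Condensed from the rpc.py calls visible in this log; the redirect target is an assumption (the log shows the surrounding echo/save_subsystem_config calls and the later --json=.../ini.json consumers, but not the redirection itself):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    # Attach the subsystem exported on 127.0.0.1:4420; the new bdev is ftln1.
    $RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
         -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    # Wrap the bdev subsystem config in a complete JSON document so that
    # standalone spdk_dd runs can replay the attachment without a live target.
    {
      echo '{"subsystems": ['
      $RPC save_subsystem_config -n bdev
      echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json   # assumed destination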
00:26:24.952 [2024-07-26 05:23:43.923725] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78186 ] 00:26:25.213 [2024-07-26 05:23:44.104990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.471 [2024-07-26 05:23:44.342849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.365  Copying: 250/1024 [MB] (250 MBps) Copying: 498/1024 [MB] (248 MBps) Copying: 746/1024 [MB] (248 MBps) Copying: 987/1024 [MB] (241 MBps) Copying: 1024/1024 [MB] (average 246 MBps) 00:26:31.365 00:26:31.365 05:23:50 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:31.365 Calculate MD5 checksum, iteration 1 00:26:31.365 05:23:50 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:31.365 05:23:50 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:31.365 05:23:50 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:31.365 05:23:50 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:31.365 05:23:50 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:31.365 05:23:50 -- ftl/common.sh@154 -- # return 0 00:26:31.365 05:23:50 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:31.365 [2024-07-26 05:23:50.412582] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
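Verification then reads the same 1 GiB back from ftln1 into a scratch file and hashes it. Again condensed from commands shown in this log; only the DD/CFG/FILE shorthands are added:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    FILE=/home/vagrant/spdk_repo/spdk/test/ftl/file
    # Read 1024 x 1 MiB from the FTL bdev into a regular file; --skip is the
    # block offset on the input side, mirroring --seek on writes.
    "$DD" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$CFG" \
          --ib=ftln1 --of="$FILE" --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum "$FILE" | cut -f1 -d' '   # e.g. d660f06e13079e48fcb9125df20a1d44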
00:26:31.365 [2024-07-26 05:23:50.412970] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78259 ] 00:26:31.624 [2024-07-26 05:23:50.594324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.883 [2024-07-26 05:23:50.821274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.202  Copying: 683/1024 [MB] (683 MBps) Copying: 1024/1024 [MB] (average 673 MBps) 00:26:35.202 00:26:35.202 05:23:54 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:35.202 05:23:54 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:37.108 05:23:55 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:37.108 Fill FTL, iteration 2 00:26:37.108 05:23:55 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d660f06e13079e48fcb9125df20a1d44 00:26:37.108 05:23:55 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:37.108 05:23:55 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:37.108 05:23:55 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:37.109 05:23:55 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:37.109 05:23:55 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:37.109 05:23:55 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:37.109 05:23:55 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:37.109 05:23:55 -- ftl/common.sh@154 -- # return 0 00:26:37.109 05:23:55 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:37.109 [2024-07-26 05:23:55.907588] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
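Stitching the ftl/upgrade_shutdown.sh trace lines together, the fill/verify driver is a simple two-pass loop. This is a reconstruction, not the script itself, but it uses only names and values that appear in the trace (iterations=2, seek/skip advancing by 1024 blocks per pass, one MD5 per pass in sums[]; tcp_dd is the wrapper shown earlier):

    FILE=/home/vagrant/spdk_repo/spdk/test/ftl/file
    seek=0; skip=0; sums=()
    for (( i = 0; i < 2; i++ )); do            # iterations=2 in the trace
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
      seek=$((seek + 1024))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of="$FILE" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      sums[i]=$(md5sum "$FILE" | cut -f1 -d' ')  # checksum recorded per pass
    done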
00:26:37.109 [2024-07-26 05:23:55.907699] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78316 ] 00:26:37.109 [2024-07-26 05:23:56.072873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.368 [2024-07-26 05:23:56.348814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.639  Copying: 239/1024 [MB] (239 MBps) Copying: 479/1024 [MB] (240 MBps) Copying: 719/1024 [MB] (240 MBps) Copying: 963/1024 [MB] (244 MBps) Copying: 1024/1024 [MB] (average 239 MBps) 00:26:43.639 00:26:43.639 Calculate MD5 checksum, iteration 2 00:26:43.639 05:24:02 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:43.639 05:24:02 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:43.639 05:24:02 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:43.639 05:24:02 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:43.639 05:24:02 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:43.639 05:24:02 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:43.639 05:24:02 -- ftl/common.sh@154 -- # return 0 00:26:43.639 05:24:02 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:43.639 [2024-07-26 05:24:02.522370] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
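Worth noting: each spdk_dd invocation in this sequence is a fresh SPDK application. The initiator target that originally attached ftln1 was killed back at pid 78125, so every dd re-creates the NVMe/TCP attachment at startup by replaying the saved bdev config, which is why --json=.../ini.json appears on every dd command line. A quick way to inspect what gets replayed, assuming the wrapped save_subsystem_config layout sketched earlier (the .subsystems/.config path expressions are assumptions about that layout):

    jq '.subsystems[] | select(.subsystem == "bdev") | .config[].method' \
       /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json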
00:26:43.639 [2024-07-26 05:24:02.522525] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78386 ] 00:26:43.639 [2024-07-26 05:24:02.702376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.898 [2024-07-26 05:24:02.929542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.266  Copying: 621/1024 [MB] (621 MBps) Copying: 1024/1024 [MB] (average 646 MBps) 00:26:48.266 00:26:48.266 05:24:07 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:48.266 05:24:07 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:50.168 05:24:08 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:50.168 05:24:08 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=66c8792160fdfee5cd3fa29219c5d0ee 00:26:50.168 05:24:08 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:50.168 05:24:08 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:50.168 05:24:08 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:50.168 [2024-07-26 05:24:09.061471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.168 [2024-07-26 05:24:09.061523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:50.168 [2024-07-26 05:24:09.061539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:50.168 [2024-07-26 05:24:09.061565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.168 [2024-07-26 05:24:09.061593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.168 [2024-07-26 05:24:09.061604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:50.168 [2024-07-26 05:24:09.061614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:50.168 [2024-07-26 05:24:09.061624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.168 [2024-07-26 05:24:09.061649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.168 [2024-07-26 05:24:09.061660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:50.168 [2024-07-26 05:24:09.061670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:50.168 [2024-07-26 05:24:09.061680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.168 [2024-07-26 05:24:09.061742] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.289 ms, result 0 00:26:50.168 true 00:26:50.168 05:24:09 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:50.427 { 00:26:50.427 "name": "ftl", 00:26:50.427 "properties": [ 00:26:50.427 { 00:26:50.427 "name": "superblock_version", 00:26:50.427 "value": 5, 00:26:50.427 "read-only": true 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "name": "base_device", 00:26:50.427 "bands": [ 00:26:50.427 { 00:26:50.427 "id": 0, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 1, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 2, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 3, 
00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 4, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 5, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 6, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 7, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 8, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 9, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 10, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 11, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 12, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 13, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 14, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 15, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 16, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 17, 00:26:50.427 "state": "FREE", 00:26:50.427 "validity": 0.0 00:26:50.427 } 00:26:50.427 ], 00:26:50.427 "read-only": true 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "name": "cache_device", 00:26:50.427 "type": "bdev", 00:26:50.427 "chunks": [ 00:26:50.427 { 00:26:50.427 "id": 0, 00:26:50.427 "state": "CLOSED", 00:26:50.427 "utilization": 1.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 1, 00:26:50.427 "state": "CLOSED", 00:26:50.427 "utilization": 1.0 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 2, 00:26:50.427 "state": "OPEN", 00:26:50.427 "utilization": 0.001953125 00:26:50.427 }, 00:26:50.427 { 00:26:50.427 "id": 3, 00:26:50.427 "state": "OPEN", 00:26:50.427 "utilization": 0.0 00:26:50.427 } 00:26:50.427 ], 00:26:50.428 "read-only": true 00:26:50.428 }, 00:26:50.428 { 00:26:50.428 "name": "verbose_mode", 00:26:50.428 "value": true, 00:26:50.428 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:50.428 }, 00:26:50.428 { 00:26:50.428 "name": "prep_upgrade_on_shutdown", 00:26:50.428 "value": false, 00:26:50.428 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:50.428 } 00:26:50.428 ] 00:26:50.428 } 00:26:50.428 05:24:09 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:50.685 [2024-07-26 05:24:09.564959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.685 [2024-07-26 05:24:09.565188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:50.686 [2024-07-26 05:24:09.565320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:50.686 [2024-07-26 05:24:09.565363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.686 [2024-07-26 05:24:09.565443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.686 [2024-07-26 
05:24:09.565516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:50.686 [2024-07-26 05:24:09.565552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:50.686 [2024-07-26 05:24:09.565585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.686 [2024-07-26 05:24:09.565634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.686 [2024-07-26 05:24:09.565648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:50.686 [2024-07-26 05:24:09.565660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:50.686 [2024-07-26 05:24:09.565671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.686 [2024-07-26 05:24:09.565744] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.766 ms, result 0 00:26:50.686 true 00:26:50.686 05:24:09 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:50.686 05:24:09 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:50.686 05:24:09 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:50.945 05:24:09 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:50.945 05:24:09 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:50.945 05:24:09 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:50.945 [2024-07-26 05:24:10.021396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.945 [2024-07-26 05:24:10.021595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:50.945 [2024-07-26 05:24:10.021699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:50.945 [2024-07-26 05:24:10.021739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.945 [2024-07-26 05:24:10.021806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.945 [2024-07-26 05:24:10.021866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:50.945 [2024-07-26 05:24:10.021950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:50.945 [2024-07-26 05:24:10.021988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.945 [2024-07-26 05:24:10.022040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:50.945 [2024-07-26 05:24:10.022075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:50.945 [2024-07-26 05:24:10.022153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:50.945 [2024-07-26 05:24:10.022190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:50.945 [2024-07-26 05:24:10.022334] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.917 ms, result 0 00:26:50.945 true 00:26:50.945 05:24:10 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:51.204 { 00:26:51.204 "name": "ftl", 00:26:51.204 "properties": [ 00:26:51.204 { 00:26:51.204 "name": "superblock_version", 00:26:51.204 "value": 5, 00:26:51.204 "read-only": true 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "name": "base_device", 00:26:51.204 
"bands": [ 00:26:51.204 { 00:26:51.204 "id": 0, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 1, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 2, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 3, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 4, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 5, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 6, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 7, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 8, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 9, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 10, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 11, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 12, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 13, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 14, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 15, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 16, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 17, 00:26:51.204 "state": "FREE", 00:26:51.204 "validity": 0.0 00:26:51.204 } 00:26:51.204 ], 00:26:51.204 "read-only": true 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "name": "cache_device", 00:26:51.204 "type": "bdev", 00:26:51.204 "chunks": [ 00:26:51.204 { 00:26:51.204 "id": 0, 00:26:51.204 "state": "CLOSED", 00:26:51.204 "utilization": 1.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 1, 00:26:51.204 "state": "CLOSED", 00:26:51.204 "utilization": 1.0 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 2, 00:26:51.204 "state": "OPEN", 00:26:51.204 "utilization": 0.001953125 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "id": 3, 00:26:51.204 "state": "OPEN", 00:26:51.204 "utilization": 0.0 00:26:51.204 } 00:26:51.204 ], 00:26:51.204 "read-only": true 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "name": "verbose_mode", 00:26:51.204 "value": true, 00:26:51.204 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:51.204 }, 00:26:51.204 { 00:26:51.204 "name": "prep_upgrade_on_shutdown", 00:26:51.204 "value": true, 00:26:51.204 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:51.204 } 00:26:51.204 ] 00:26:51.204 } 00:26:51.204 05:24:10 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:51.204 05:24:10 -- ftl/common.sh@130 -- # [[ -n 78001 ]] 00:26:51.204 05:24:10 -- ftl/common.sh@131 -- # killprocess 78001 00:26:51.204 05:24:10 -- common/autotest_common.sh@926 -- # '[' -z 78001 ']' 00:26:51.204 05:24:10 -- common/autotest_common.sh@930 -- # kill -0 78001 
00:26:51.204 05:24:10 -- common/autotest_common.sh@931 -- # uname 00:26:51.204 05:24:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:51.204 05:24:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78001 00:26:51.204 killing process with pid 78001 00:26:51.204 05:24:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:51.204 05:24:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:51.204 05:24:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78001' 00:26:51.204 05:24:10 -- common/autotest_common.sh@945 -- # kill 78001 00:26:51.204 05:24:10 -- common/autotest_common.sh@950 -- # wait 78001 00:26:52.581 [2024-07-26 05:24:11.366098] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:26:52.581 [2024-07-26 05:24:11.384651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.581 [2024-07-26 05:24:11.384692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:52.581 [2024-07-26 05:24:11.384708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:52.581 [2024-07-26 05:24:11.384718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:52.581 [2024-07-26 05:24:11.384745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:52.581 [2024-07-26 05:24:11.388262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:52.581 [2024-07-26 05:24:11.388288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:52.581 [2024-07-26 05:24:11.388300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.500 ms 00:26:52.581 [2024-07-26 05:24:11.388311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.711 [2024-07-26 05:24:18.794286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.711 [2024-07-26 05:24:18.794339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:00.711 [2024-07-26 05:24:18.794356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7405.900 ms 00:27:00.711 [2024-07-26 05:24:18.794367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.711 [2024-07-26 05:24:18.795490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.711 [2024-07-26 05:24:18.795516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:00.711 [2024-07-26 05:24:18.795528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.103 ms 00:27:00.711 [2024-07-26 05:24:18.795539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.711 [2024-07-26 05:24:18.796500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.711 [2024-07-26 05:24:18.796522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:00.711 [2024-07-26 05:24:18.796534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.925 ms 00:27:00.711 [2024-07-26 05:24:18.796544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.711 [2024-07-26 05:24:18.812864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.711 [2024-07-26 05:24:18.812902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:00.711 [2024-07-26 05:24:18.812915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 16.272 ms 00:27:00.711 [2024-07-26 05:24:18.812924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.711 [2024-07-26 05:24:18.822925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.711 [2024-07-26 05:24:18.822963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:00.711 [2024-07-26 05:24:18.822998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.966 ms 00:27:00.711 [2024-07-26 05:24:18.823008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.711 [2024-07-26 05:24:18.823106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.711 [2024-07-26 05:24:18.823120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:00.711 [2024-07-26 05:24:18.823131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:27:00.711 [2024-07-26 05:24:18.823141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.711 [2024-07-26 05:24:18.838275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.711 [2024-07-26 05:24:18.838308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:00.711 [2024-07-26 05:24:18.838320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.116 ms 00:27:00.711 [2024-07-26 05:24:18.838330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.711 [2024-07-26 05:24:18.854152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.711 [2024-07-26 05:24:18.854185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:00.711 [2024-07-26 05:24:18.854197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.788 ms 00:27:00.711 [2024-07-26 05:24:18.854217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.711 [2024-07-26 05:24:18.870181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.711 [2024-07-26 05:24:18.870223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:00.711 [2024-07-26 05:24:18.870236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.928 ms 00:27:00.711 [2024-07-26 05:24:18.870246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.711 [2024-07-26 05:24:18.885632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.711 [2024-07-26 05:24:18.885665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:00.711 [2024-07-26 05:24:18.885677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.318 ms 00:27:00.711 [2024-07-26 05:24:18.885686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.711 [2024-07-26 05:24:18.885720] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:00.711 [2024-07-26 05:24:18.885736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:00.711 [2024-07-26 05:24:18.885749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:00.711 [2024-07-26 05:24:18.885760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:00.711 [2024-07-26 05:24:18.885772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 
05:24:18.885783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:00.711 [2024-07-26 05:24:18.885938] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:00.711 [2024-07-26 05:24:18.885948] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 414ebf38-7d95-4a7a-9a1a-529f611a254d 00:27:00.711 [2024-07-26 05:24:18.885976] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:00.711 [2024-07-26 05:24:18.885986] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:00.711 [2024-07-26 05:24:18.885995] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:00.711 [2024-07-26 05:24:18.886006] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:00.711 [2024-07-26 05:24:18.886016] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:00.711 [2024-07-26 05:24:18.886026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:00.711 [2024-07-26 05:24:18.886036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:00.711 [2024-07-26 05:24:18.886045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:00.711 [2024-07-26 05:24:18.886055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:00.712 [2024-07-26 05:24:18.886065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.712 [2024-07-26 05:24:18.886075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:00.712 [2024-07-26 05:24:18.886087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 
0.346 ms 00:27:00.712 [2024-07-26 05:24:18.886097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:18.906627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.712 [2024-07-26 05:24:18.906661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:00.712 [2024-07-26 05:24:18.906673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.509 ms 00:27:00.712 [2024-07-26 05:24:18.906700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:18.907009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.712 [2024-07-26 05:24:18.907021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:00.712 [2024-07-26 05:24:18.907032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:27:00.712 [2024-07-26 05:24:18.907048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:18.974919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:00.712 [2024-07-26 05:24:18.974955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:00.712 [2024-07-26 05:24:18.974968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:00.712 [2024-07-26 05:24:18.974993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:18.975031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:00.712 [2024-07-26 05:24:18.975041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:00.712 [2024-07-26 05:24:18.975051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:00.712 [2024-07-26 05:24:18.975069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:18.975142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:00.712 [2024-07-26 05:24:18.975155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:00.712 [2024-07-26 05:24:18.975165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:00.712 [2024-07-26 05:24:18.975175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:18.975194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:00.712 [2024-07-26 05:24:18.975204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:00.712 [2024-07-26 05:24:18.975214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:00.712 [2024-07-26 05:24:18.975239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:19.092894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:00.712 [2024-07-26 05:24:19.092952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:00.712 [2024-07-26 05:24:19.092967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:00.712 [2024-07-26 05:24:19.092978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:19.137695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:00.712 [2024-07-26 05:24:19.137746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:00.712 [2024-07-26 05:24:19.137759] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:00.712 [2024-07-26 05:24:19.137770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:19.137857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:00.712 [2024-07-26 05:24:19.137869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:00.712 [2024-07-26 05:24:19.137879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:00.712 [2024-07-26 05:24:19.137888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:19.137931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:00.712 [2024-07-26 05:24:19.137951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:00.712 [2024-07-26 05:24:19.137961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:00.712 [2024-07-26 05:24:19.137970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:19.138075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:00.712 [2024-07-26 05:24:19.138092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:00.712 [2024-07-26 05:24:19.138102] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:00.712 [2024-07-26 05:24:19.138112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:19.138146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:00.712 [2024-07-26 05:24:19.138158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:00.712 [2024-07-26 05:24:19.138168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:00.712 [2024-07-26 05:24:19.138178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:19.138253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:00.712 [2024-07-26 05:24:19.138270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:00.712 [2024-07-26 05:24:19.138281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:00.712 [2024-07-26 05:24:19.138291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:19.138335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:00.712 [2024-07-26 05:24:19.138347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:00.712 [2024-07-26 05:24:19.138357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:00.712 [2024-07-26 05:24:19.138368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.712 [2024-07-26 05:24:19.138517] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7753.759 ms, result 0 00:27:05.980 05:24:24 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:05.980 05:24:24 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:05.980 05:24:24 -- ftl/common.sh@81 -- # local base_bdev= 00:27:05.980 05:24:24 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:05.980 05:24:24 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:05.980 05:24:24 -- ftl/common.sh@89 -- # spdk_tgt_pid=78627 00:27:05.980 05:24:24 -- ftl/common.sh@90 -- # export spdk_tgt_pid 
00:27:05.980 05:24:24 -- ftl/common.sh@91 -- # waitforlisten 78627 00:27:05.980 05:24:24 -- common/autotest_common.sh@819 -- # '[' -z 78627 ']' 00:27:05.980 05:24:24 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:05.980 05:24:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.980 05:24:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:05.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.980 05:24:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.980 05:24:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:05.980 05:24:24 -- common/autotest_common.sh@10 -- # set +x 00:27:05.980 [2024-07-26 05:24:24.158535] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:05.980 [2024-07-26 05:24:24.158740] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78627 ] 00:27:05.980 [2024-07-26 05:24:24.346127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.980 [2024-07-26 05:24:24.604980] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:05.980 [2024-07-26 05:24:24.605201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.912 [2024-07-26 05:24:25.752602] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:06.912 [2024-07-26 05:24:25.752692] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:06.912 [2024-07-26 05:24:25.895111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.912 [2024-07-26 05:24:25.895165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:06.912 [2024-07-26 05:24:25.895184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:06.912 [2024-07-26 05:24:25.895196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.912 [2024-07-26 05:24:25.895275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.912 [2024-07-26 05:24:25.895300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:06.912 [2024-07-26 05:24:25.895313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:06.912 [2024-07-26 05:24:25.895330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.912 [2024-07-26 05:24:25.895356] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:06.912 [2024-07-26 05:24:25.896432] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:06.912 [2024-07-26 05:24:25.896463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.912 [2024-07-26 05:24:25.896479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:06.912 [2024-07-26 05:24:25.896491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.111 ms 00:27:06.912 [2024-07-26 05:24:25.896502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.912 [2024-07-26 
05:24:25.899025] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:06.913 [2024-07-26 05:24:25.919775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.913 [2024-07-26 05:24:25.919818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:06.913 [2024-07-26 05:24:25.919835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.751 ms 00:27:06.913 [2024-07-26 05:24:25.919847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.913 [2024-07-26 05:24:25.919921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.913 [2024-07-26 05:24:25.919935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:06.913 [2024-07-26 05:24:25.919954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:06.913 [2024-07-26 05:24:25.919966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.913 [2024-07-26 05:24:25.933221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.913 [2024-07-26 05:24:25.933253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:06.913 [2024-07-26 05:24:25.933268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.173 ms 00:27:06.913 [2024-07-26 05:24:25.933281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.913 [2024-07-26 05:24:25.933338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.913 [2024-07-26 05:24:25.933354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:06.913 [2024-07-26 05:24:25.933366] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:06.913 [2024-07-26 05:24:25.933379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.913 [2024-07-26 05:24:25.933451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.913 [2024-07-26 05:24:25.933465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:06.913 [2024-07-26 05:24:25.933477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:06.913 [2024-07-26 05:24:25.933489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.913 [2024-07-26 05:24:25.933527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:06.913 [2024-07-26 05:24:25.940175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.913 [2024-07-26 05:24:25.940218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:06.913 [2024-07-26 05:24:25.940233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.660 ms 00:27:06.913 [2024-07-26 05:24:25.940244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.913 [2024-07-26 05:24:25.940287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.913 [2024-07-26 05:24:25.940300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:06.913 [2024-07-26 05:24:25.940313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:06.913 [2024-07-26 05:24:25.940324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.913 [2024-07-26 05:24:25.940369] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:06.913 [2024-07-26 05:24:25.940400] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:06.913 [2024-07-26 05:24:25.940438] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:06.913 [2024-07-26 05:24:25.940471] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:06.913 [2024-07-26 05:24:25.940546] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:06.913 [2024-07-26 05:24:25.940562] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:06.913 [2024-07-26 05:24:25.940577] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:06.913 [2024-07-26 05:24:25.940591] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:06.913 [2024-07-26 05:24:25.940605] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:06.913 [2024-07-26 05:24:25.940617] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:06.913 [2024-07-26 05:24:25.940630] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:06.913 [2024-07-26 05:24:25.940641] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:06.913 [2024-07-26 05:24:25.940659] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:06.913 [2024-07-26 05:24:25.940671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.913 [2024-07-26 05:24:25.940686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:06.913 [2024-07-26 05:24:25.940698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.306 ms 00:27:06.913 [2024-07-26 05:24:25.940709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.913 [2024-07-26 05:24:25.940770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.913 [2024-07-26 05:24:25.940782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:06.913 [2024-07-26 05:24:25.940794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:27:06.913 [2024-07-26 05:24:25.940805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.913 [2024-07-26 05:24:25.940883] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:06.913 [2024-07-26 05:24:25.940897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:06.913 [2024-07-26 05:24:25.940914] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:06.913 [2024-07-26 05:24:25.940926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.913 [2024-07-26 05:24:25.940937] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:06.913 [2024-07-26 05:24:25.940947] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:06.913 [2024-07-26 05:24:25.940958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:06.913 [2024-07-26 05:24:25.940970] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:06.913 [2024-07-26 05:24:25.940981] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:06.913 [2024-07-26 
05:24:25.940992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.913 [2024-07-26 05:24:25.941002] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:06.913 [2024-07-26 05:24:25.941013] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:06.913 [2024-07-26 05:24:25.941023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.913 [2024-07-26 05:24:25.941034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:06.913 [2024-07-26 05:24:25.941045] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:06.913 [2024-07-26 05:24:25.941055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.913 [2024-07-26 05:24:25.941066] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:06.913 [2024-07-26 05:24:25.941076] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:06.913 [2024-07-26 05:24:25.941086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.913 [2024-07-26 05:24:25.941096] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:06.913 [2024-07-26 05:24:25.941107] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:06.913 [2024-07-26 05:24:25.941117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:06.913 [2024-07-26 05:24:25.941127] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:06.913 [2024-07-26 05:24:25.941137] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:06.913 [2024-07-26 05:24:25.941149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:06.913 [2024-07-26 05:24:25.941159] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:06.913 [2024-07-26 05:24:25.941169] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:06.913 [2024-07-26 05:24:25.941179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:06.913 [2024-07-26 05:24:25.941189] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:06.913 [2024-07-26 05:24:25.941199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:06.913 [2024-07-26 05:24:25.941225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:06.913 [2024-07-26 05:24:25.941235] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:06.913 [2024-07-26 05:24:25.941246] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:06.913 [2024-07-26 05:24:25.941256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:06.913 [2024-07-26 05:24:25.941266] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:06.913 [2024-07-26 05:24:25.941277] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:06.913 [2024-07-26 05:24:25.941286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.913 [2024-07-26 05:24:25.941297] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:06.913 [2024-07-26 05:24:25.941307] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:06.913 [2024-07-26 05:24:25.941317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.913 [2024-07-26 05:24:25.941327] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:06.913 
[2024-07-26 05:24:25.941338] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:06.913 [2024-07-26 05:24:25.941349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:06.913 [2024-07-26 05:24:25.941360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:06.913 [2024-07-26 05:24:25.941371] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:06.913 [2024-07-26 05:24:25.941382] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:06.913 [2024-07-26 05:24:25.941392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:06.913 [2024-07-26 05:24:25.941402] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:06.913 [2024-07-26 05:24:25.941412] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:06.913 [2024-07-26 05:24:25.941431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:06.913 [2024-07-26 05:24:25.941443] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:06.913 [2024-07-26 05:24:25.941456] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:06.913 [2024-07-26 05:24:25.941470] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:06.913 [2024-07-26 05:24:25.941481] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:06.914 [2024-07-26 05:24:25.941493] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:06.914 [2024-07-26 05:24:25.941504] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:06.914 [2024-07-26 05:24:25.941517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:06.914 [2024-07-26 05:24:25.941529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:06.914 [2024-07-26 05:24:25.941540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:06.914 [2024-07-26 05:24:25.941552] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:06.914 [2024-07-26 05:24:25.941563] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:06.914 [2024-07-26 05:24:25.941574] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:06.914 [2024-07-26 05:24:25.941597] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:06.914 [2024-07-26 05:24:25.941609] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:06.914 [2024-07-26 05:24:25.941621] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 
blk_sz:0x3e0a0 00:27:06.914 [2024-07-26 05:24:25.941632] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:06.914 [2024-07-26 05:24:25.941644] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:06.914 [2024-07-26 05:24:25.941661] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:06.914 [2024-07-26 05:24:25.941673] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:06.914 [2024-07-26 05:24:25.941685] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:06.914 [2024-07-26 05:24:25.941697] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:06.914 [2024-07-26 05:24:25.941710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.914 [2024-07-26 05:24:25.941721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:06.914 [2024-07-26 05:24:25.941732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.861 ms 00:27:06.914 [2024-07-26 05:24:25.941744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.914 [2024-07-26 05:24:25.973119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.914 [2024-07-26 05:24:25.973157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:06.914 [2024-07-26 05:24:25.973171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 31.318 ms 00:27:06.914 [2024-07-26 05:24:25.973183] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.914 [2024-07-26 05:24:25.973240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.914 [2024-07-26 05:24:25.973253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:06.914 [2024-07-26 05:24:25.973265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:06.914 [2024-07-26 05:24:25.973302] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.172 [2024-07-26 05:24:26.036213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.172 [2024-07-26 05:24:26.036250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:07.172 [2024-07-26 05:24:26.036270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 62.843 ms 00:27:07.172 [2024-07-26 05:24:26.036282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.172 [2024-07-26 05:24:26.036326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.172 [2024-07-26 05:24:26.036339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:07.172 [2024-07-26 05:24:26.036352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:07.172 [2024-07-26 05:24:26.036364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.172 [2024-07-26 05:24:26.037227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.172 [2024-07-26 05:24:26.037249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:07.172 [2024-07-26 05:24:26.037261] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.807 ms 00:27:07.172 [2024-07-26 05:24:26.037279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.172 [2024-07-26 05:24:26.037325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.172 [2024-07-26 05:24:26.037337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:07.172 [2024-07-26 05:24:26.037349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:07.172 [2024-07-26 05:24:26.037361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.172 [2024-07-26 05:24:26.067888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.172 [2024-07-26 05:24:26.067925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:07.172 [2024-07-26 05:24:26.067941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.497 ms 00:27:07.172 [2024-07-26 05:24:26.067953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.172 [2024-07-26 05:24:26.089391] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:07.172 [2024-07-26 05:24:26.089448] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:07.172 [2024-07-26 05:24:26.089466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.172 [2024-07-26 05:24:26.089480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:07.172 [2024-07-26 05:24:26.089493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 21.386 ms 00:27:07.172 [2024-07-26 05:24:26.089505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.172 [2024-07-26 05:24:26.109520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.172 [2024-07-26 05:24:26.109576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:07.172 [2024-07-26 05:24:26.109607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.962 ms 00:27:07.172 [2024-07-26 05:24:26.109620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.172 [2024-07-26 05:24:26.128270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.172 [2024-07-26 05:24:26.128309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:07.172 [2024-07-26 05:24:26.128324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.598 ms 00:27:07.172 [2024-07-26 05:24:26.128336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.172 [2024-07-26 05:24:26.146640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.172 [2024-07-26 05:24:26.146679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:07.172 [2024-07-26 05:24:26.146694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.256 ms 00:27:07.172 [2024-07-26 05:24:26.146705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.172 [2024-07-26 05:24:26.147227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.172 [2024-07-26 05:24:26.147263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:07.172 [2024-07-26 05:24:26.147286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.406 ms 00:27:07.172 
[2024-07-26 05:24:26.147304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.172 [2024-07-26 05:24:26.242788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.172 [2024-07-26 05:24:26.242850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:07.172 [2024-07-26 05:24:26.242870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 95.442 ms 00:27:07.172 [2024-07-26 05:24:26.242884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.172 [2024-07-26 05:24:26.254342] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:07.172 [2024-07-26 05:24:26.255327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.172 [2024-07-26 05:24:26.255351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:07.172 [2024-07-26 05:24:26.255367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.382 ms 00:27:07.172 [2024-07-26 05:24:26.255380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.172 [2024-07-26 05:24:26.255469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.172 [2024-07-26 05:24:26.255489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:07.173 [2024-07-26 05:24:26.255502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:07.173 [2024-07-26 05:24:26.255514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.173 [2024-07-26 05:24:26.255589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.173 [2024-07-26 05:24:26.255603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:07.173 [2024-07-26 05:24:26.255616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:07.173 [2024-07-26 05:24:26.255627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.173 [2024-07-26 05:24:26.258373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.173 [2024-07-26 05:24:26.258408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:07.173 [2024-07-26 05:24:26.258425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.717 ms 00:27:07.173 [2024-07-26 05:24:26.258438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.173 [2024-07-26 05:24:26.258473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.173 [2024-07-26 05:24:26.258486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:07.173 [2024-07-26 05:24:26.258499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:07.173 [2024-07-26 05:24:26.258510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.173 [2024-07-26 05:24:26.258564] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:07.173 [2024-07-26 05:24:26.258579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.173 [2024-07-26 05:24:26.258591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:07.173 [2024-07-26 05:24:26.258603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:07.173 [2024-07-26 05:24:26.258620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.433 [2024-07-26 05:24:26.295517] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.433 [2024-07-26 05:24:26.295558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:07.433 [2024-07-26 05:24:26.295574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 36.870 ms 00:27:07.433 [2024-07-26 05:24:26.295587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.433 [2024-07-26 05:24:26.295671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.433 [2024-07-26 05:24:26.295686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:07.433 [2024-07-26 05:24:26.295707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:07.433 [2024-07-26 05:24:26.295719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.433 [2024-07-26 05:24:26.297334] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 401.653 ms, result 0 00:27:07.433 [2024-07-26 05:24:26.311903] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.433 [2024-07-26 05:24:26.327918] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:07.433 [2024-07-26 05:24:26.338089] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:07.433 05:24:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:07.433 05:24:26 -- common/autotest_common.sh@852 -- # return 0 00:27:07.433 05:24:26 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:07.433 05:24:26 -- ftl/common.sh@95 -- # return 0 00:27:07.433 05:24:26 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:07.433 [2024-07-26 05:24:26.519168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.433 [2024-07-26 05:24:26.519344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:07.433 [2024-07-26 05:24:26.519456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:07.433 [2024-07-26 05:24:26.519497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.434 [2024-07-26 05:24:26.519557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.434 [2024-07-26 05:24:26.519593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:07.434 [2024-07-26 05:24:26.519627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:07.434 [2024-07-26 05:24:26.519660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.434 [2024-07-26 05:24:26.519770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.434 [2024-07-26 05:24:26.519810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:07.434 [2024-07-26 05:24:26.519844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:07.434 [2024-07-26 05:24:26.519885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.434 [2024-07-26 05:24:26.519970] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.787 ms, result 0 00:27:07.434 true 00:27:07.434 05:24:26 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:07.692 
{
00:27:07.692   "name": "ftl",
00:27:07.692   "properties": [
00:27:07.692     {
00:27:07.692       "name": "superblock_version",
00:27:07.692       "value": 5,
00:27:07.692       "read-only": true
00:27:07.692     },
00:27:07.692     {
00:27:07.692       "name": "base_device",
00:27:07.692       "bands": [
00:27:07.692         {
00:27:07.692           "id": 0,
00:27:07.692           "state": "CLOSED",
00:27:07.692           "validity": 1.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 1,
00:27:07.692           "state": "CLOSED",
00:27:07.692           "validity": 1.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 2,
00:27:07.692           "state": "CLOSED",
00:27:07.692           "validity": 0.007843137254901933
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 3,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 4,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 5,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 6,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 7,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 8,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 9,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 10,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 11,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 12,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 13,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 14,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 15,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 16,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 17,
00:27:07.692           "state": "FREE",
00:27:07.692           "validity": 0.0
00:27:07.692         }
00:27:07.692       ],
00:27:07.692       "read-only": true
00:27:07.692     },
00:27:07.692     {
00:27:07.692       "name": "cache_device",
00:27:07.692       "type": "bdev",
00:27:07.692       "chunks": [
00:27:07.692         {
00:27:07.692           "id": 0,
00:27:07.692           "state": "OPEN",
00:27:07.692           "utilization": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 1,
00:27:07.692           "state": "OPEN",
00:27:07.692           "utilization": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 2,
00:27:07.692           "state": "FREE",
00:27:07.692           "utilization": 0.0
00:27:07.692         },
00:27:07.692         {
00:27:07.692           "id": 3,
00:27:07.692           "state": "FREE",
00:27:07.692           "utilization": 0.0
00:27:07.692         }
00:27:07.692       ],
00:27:07.692       "read-only": true
00:27:07.692     },
00:27:07.692     {
00:27:07.692       "name": "verbose_mode",
00:27:07.692       "value": true,
00:27:07.692       "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:27:07.692     },
00:27:07.692     {
00:27:07.692       "name": "prep_upgrade_on_shutdown",
00:27:07.692       "value": false,
00:27:07.692       "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:27:07.692     }
00:27:07.692   ]
00:27:07.692 }
00:27:07.692 05:24:26 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:27:07.692 05:24:26 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:27:07.692 05:24:26 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:27:07.950 05:24:26 -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:27:07.950 05:24:26 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:27:07.950 05:24:26 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:27:07.950 05:24:26 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:27:07.950 05:24:26 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:27:08.208 Validate MD5 checksum, iteration 1
00:27:08.208 05:24:27 -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:27:08.208 05:24:27 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:27:08.208 05:24:27 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:27:08.208 05:24:27 -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:27:08.208 05:24:27 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:27:08.208 05:24:27 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:27:08.208 05:24:27 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:27:08.208 05:24:27 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:27:08.208 05:24:27 -- ftl/common.sh@198 -- # tcp_initiator_setup
00:27:08.208 05:24:27 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:27:08.208 05:24:27 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:27:08.208 05:24:27 -- ftl/common.sh@154 -- # return 0
00:27:08.208 05:24:27 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:27:08.208 [2024-07-26 05:24:27.193653] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
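A note on the two jq filters traced above: they are the gate for the checksum test. After the restart, the cache_device must report no chunk with non-zero utilization and no band may still be open, so both counts have to come back 0 (hence the two [[ 0 -ne 0 ]] checks that fall through). A minimal standalone rendering of the same check — saving the dump to props.json is illustrative only, the test itself pipes rpc.py straight into jq:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl > props.json
    # cache chunks that still hold data not yet written back to the base device
    used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' props.json)
    # bands reported as still open
    opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' props.json)
    [[ $used -ne 0 || $opened -ne 0 ]] && { echo 'FTL not clean after restart' >&2; exit 1; }

One observation: in the dump above the band list lives under the property named "base_device", so a filter keyed on a property literally named "bands" appears unable to match anything and will always return 0; the chunk filter, by contrast, keys on "cache_device" and does real work.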
00:27:08.208 [2024-07-26 05:24:27.193766] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78677 ]
00:27:08.467 [2024-07-26 05:24:27.361189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:08.725 [2024-07-26 05:24:27.684244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:13.101  Copying: 575/1024 [MB] (575 MBps) Copying: 1024/1024 [MB] (average 556 MBps)
00:27:13.101 
00:27:13.101 05:24:31 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:27:13.101 05:24:31 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:27:15.000 05:24:33 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:27:15.000 Validate MD5 checksum, iteration 2
00:27:15.000 05:24:33 -- ftl/upgrade_shutdown.sh@103 -- # sum=d660f06e13079e48fcb9125df20a1d44
00:27:15.000 05:24:33 -- ftl/upgrade_shutdown.sh@105 -- # [[ d660f06e13079e48fcb9125df20a1d44 != \d\6\6\0\f\0\6\e\1\3\0\7\9\e\4\8\f\c\b\9\1\2\5\d\f\2\0\a\1\d\4\4 ]]
00:27:15.000 05:24:33 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:27:15.000 05:24:33 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:27:15.000 05:24:33 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:27:15.000 05:24:33 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:27:15.000 05:24:33 -- ftl/common.sh@198 -- # tcp_initiator_setup
00:27:15.000 05:24:33 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:27:15.000 05:24:33 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:27:15.000 05:24:33 -- ftl/common.sh@154 -- # return 0
00:27:15.000 05:24:33 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:27:15.000 [2024-07-26 05:24:33.740402] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
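The backslash-riddled comparison above is only xtrace rendering: under set -x, the unquoted right-hand side of a [[ ... != ... ]] test is a glob pattern, so bash prints every one of its characters escaped. Decoded, an iteration reduces to the sketch below; the expected digest is the reference value the test recorded for this 1024 MiB window before the restart, and cut -f1 '-d ' is simply cut -d' ' -f1, taking the digest field of md5sum's output:

    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    [[ $sum != d660f06e13079e48fcb9125df20a1d44 ]] && exit 1  # data changed across restart
    skip=$((skip + 1024))  # advance to the next 1024 MiB window, matching the skip=1024 trace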
00:27:15.000 [2024-07-26 05:24:33.740556] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78748 ]
00:27:15.258 [2024-07-26 05:24:33.927518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:15.258 [2024-07-26 05:24:34.204369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:21.019  Copying: 572/1024 [MB] (572 MBps) Copying: 1024/1024 [MB] (average 564 MBps)
00:27:21.019 
00:27:21.019 05:24:39 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:27:21.019 05:24:39 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:27:22.922 05:24:41 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:27:22.922 05:24:41 -- ftl/upgrade_shutdown.sh@103 -- # sum=66c8792160fdfee5cd3fa29219c5d0ee
00:27:22.922 05:24:41 -- ftl/upgrade_shutdown.sh@105 -- # [[ 66c8792160fdfee5cd3fa29219c5d0ee != \6\6\c\8\7\9\2\1\6\0\f\d\f\e\e\5\c\d\3\f\a\2\9\2\1\9\c\5\d\0\e\e ]]
00:27:22.922 05:24:41 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:27:22.922 05:24:41 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:27:22.922 05:24:41 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty
00:27:22.922 05:24:41 -- ftl/common.sh@137 -- # [[ -n 78627 ]]
00:27:22.922 05:24:41 -- ftl/common.sh@138 -- # kill -9 78627
00:27:22.922 05:24:41 -- ftl/common.sh@139 -- # unset spdk_tgt_pid
00:27:22.922 05:24:41 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup
00:27:22.922 05:24:41 -- ftl/common.sh@81 -- # local base_bdev=
00:27:22.922 05:24:41 -- ftl/common.sh@82 -- # local cache_bdev=
00:27:22.922 05:24:41 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:27:22.922 05:24:41 -- ftl/common.sh@89 -- # spdk_tgt_pid=78834
00:27:22.922 05:24:41 -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:27:22.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:22.922 05:24:41 -- ftl/common.sh@91 -- # waitforlisten 78834
00:27:22.922 05:24:41 -- common/autotest_common.sh@819 -- # '[' -z 78834 ']'
00:27:22.922 05:24:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:22.922 05:24:41 -- common/autotest_common.sh@824 -- # local max_retries=100
00:27:22.922 05:24:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:22.922 05:24:41 -- common/autotest_common.sh@828 -- # xtrace_disable
00:27:22.922 05:24:41 -- common/autotest_common.sh@10 -- # set +x
00:27:22.922 05:24:41 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:27:22.922 [2024-07-26 05:24:41.736297] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
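Unlike the first shutdown, tcp_target_shutdown_dirty kills the target with SIGKILL, so the 'FTL shutdown' management process never runs and the clean state is never persisted; the startup that follows has to recover from on-disk metadata instead (see the 'SHM: clean 0, shm_clean 0' load below). A rough paraphrase of the traced sequence — this sketches the effect of the shell functions above, not their exact bodies:

    kill -9 "$spdk_tgt_pid"   # dirty shutdown: 'Set FTL clean state' never executes
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # autotest helper: waits until /var/tmp/spdk.sock accepts RPCs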
00:27:22.922 [2024-07-26 05:24:41.736430] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78834 ] 00:27:22.922 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 818: 78627 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:22.922 [2024-07-26 05:24:41.905106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.182 [2024-07-26 05:24:42.165483] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:23.182 [2024-07-26 05:24:42.165708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.560 [2024-07-26 05:24:43.342881] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:24.560 [2024-07-26 05:24:43.342954] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:24.560 [2024-07-26 05:24:43.483683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.560 [2024-07-26 05:24:43.483731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:24.560 [2024-07-26 05:24:43.483748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:24.560 [2024-07-26 05:24:43.483759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.560 [2024-07-26 05:24:43.483815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.560 [2024-07-26 05:24:43.483836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:24.560 [2024-07-26 05:24:43.483847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:24.560 [2024-07-26 05:24:43.483860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.560 [2024-07-26 05:24:43.483883] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:24.560 [2024-07-26 05:24:43.484992] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:24.560 [2024-07-26 05:24:43.485027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.560 [2024-07-26 05:24:43.485043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:24.560 [2024-07-26 05:24:43.485054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.148 ms 00:27:24.560 [2024-07-26 05:24:43.485065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.560 [2024-07-26 05:24:43.485406] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:24.560 [2024-07-26 05:24:43.513276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.560 [2024-07-26 05:24:43.513318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:24.560 [2024-07-26 05:24:43.513333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.870 ms 00:27:24.560 [2024-07-26 05:24:43.513344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.560 [2024-07-26 05:24:43.527905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.560 [2024-07-26 05:24:43.527940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:24.560 [2024-07-26 05:24:43.527952] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:24.560 [2024-07-26 05:24:43.527962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.560 [2024-07-26 05:24:43.528458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.560 [2024-07-26 05:24:43.528473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:24.560 [2024-07-26 05:24:43.528484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.421 ms 00:27:24.560 [2024-07-26 05:24:43.528494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.560 [2024-07-26 05:24:43.528536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.560 [2024-07-26 05:24:43.528549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:24.560 [2024-07-26 05:24:43.528559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:24.560 [2024-07-26 05:24:43.528569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.560 [2024-07-26 05:24:43.528599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.560 [2024-07-26 05:24:43.528610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:24.560 [2024-07-26 05:24:43.528620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:24.560 [2024-07-26 05:24:43.528630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.560 [2024-07-26 05:24:43.528656] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:24.560 [2024-07-26 05:24:43.533390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.560 [2024-07-26 05:24:43.533420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:24.560 [2024-07-26 05:24:43.533439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.743 ms 00:27:24.560 [2024-07-26 05:24:43.533449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.560 [2024-07-26 05:24:43.533483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.560 [2024-07-26 05:24:43.533494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:24.560 [2024-07-26 05:24:43.533504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:24.560 [2024-07-26 05:24:43.533514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.560 [2024-07-26 05:24:43.533550] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:24.560 [2024-07-26 05:24:43.533577] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:24.560 [2024-07-26 05:24:43.533609] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:24.560 [2024-07-26 05:24:43.533634] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:24.560 [2024-07-26 05:24:43.533700] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:24.560 [2024-07-26 05:24:43.533714] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:24.560 [2024-07-26 05:24:43.533727] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:27:24.560 [2024-07-26 05:24:43.533748] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:24.560 [2024-07-26 05:24:43.533760] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:24.560 [2024-07-26 05:24:43.533771] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:24.560 [2024-07-26 05:24:43.533781] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:24.560 [2024-07-26 05:24:43.533790] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:24.560 [2024-07-26 05:24:43.533799] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:24.560 [2024-07-26 05:24:43.533809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.560 [2024-07-26 05:24:43.533819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:24.560 [2024-07-26 05:24:43.533829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:27:24.560 [2024-07-26 05:24:43.533839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.560 [2024-07-26 05:24:43.533893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.560 [2024-07-26 05:24:43.533906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:24.560 [2024-07-26 05:24:43.533917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:24.560 [2024-07-26 05:24:43.533926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.560 [2024-07-26 05:24:43.533991] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:24.560 [2024-07-26 05:24:43.534003] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:24.560 [2024-07-26 05:24:43.534013] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:24.561 [2024-07-26 05:24:43.534024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:24.561 [2024-07-26 05:24:43.534034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:24.561 [2024-07-26 05:24:43.534043] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:24.561 [2024-07-26 05:24:43.534052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:24.561 [2024-07-26 05:24:43.534062] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:24.561 [2024-07-26 05:24:43.534071] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:24.561 [2024-07-26 05:24:43.534082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:24.561 [2024-07-26 05:24:43.534091] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:24.561 [2024-07-26 05:24:43.534101] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:24.561 [2024-07-26 05:24:43.534109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:24.561 [2024-07-26 05:24:43.534118] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:24.561 [2024-07-26 05:24:43.534128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:24.561 [2024-07-26 05:24:43.534137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:24.561 [2024-07-26 05:24:43.534145] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:27:24.561 [2024-07-26 05:24:43.534154] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:24.561 [2024-07-26 05:24:43.534163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:24.561 [2024-07-26 05:24:43.534171] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:24.561 [2024-07-26 05:24:43.534180] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:24.561 [2024-07-26 05:24:43.534189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:24.561 [2024-07-26 05:24:43.534198] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:24.561 [2024-07-26 05:24:43.534218] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:24.561 [2024-07-26 05:24:43.534228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:24.561 [2024-07-26 05:24:43.534236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:24.561 [2024-07-26 05:24:43.534246] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:24.561 [2024-07-26 05:24:43.534254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:24.561 [2024-07-26 05:24:43.534263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:24.561 [2024-07-26 05:24:43.534271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:24.561 [2024-07-26 05:24:43.534280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:24.561 [2024-07-26 05:24:43.534289] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:24.561 [2024-07-26 05:24:43.534297] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:24.561 [2024-07-26 05:24:43.534306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:24.561 [2024-07-26 05:24:43.534314] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:24.561 [2024-07-26 05:24:43.534323] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:24.561 [2024-07-26 05:24:43.534331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:24.561 [2024-07-26 05:24:43.534339] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:24.561 [2024-07-26 05:24:43.534349] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:24.561 [2024-07-26 05:24:43.534358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:24.561 [2024-07-26 05:24:43.534366] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:24.561 [2024-07-26 05:24:43.534376] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:24.561 [2024-07-26 05:24:43.534389] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:24.561 [2024-07-26 05:24:43.534399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:24.561 [2024-07-26 05:24:43.534409] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:24.561 [2024-07-26 05:24:43.534418] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:24.561 [2024-07-26 05:24:43.534427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:24.561 [2024-07-26 05:24:43.534437] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:24.561 [2024-07-26 05:24:43.534446] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:27:24.561 [2024-07-26 05:24:43.534455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:24.561 [2024-07-26 05:24:43.534465] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:24.561 [2024-07-26 05:24:43.534476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:24.561 [2024-07-26 05:24:43.534487] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:24.561 [2024-07-26 05:24:43.534496] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:24.561 [2024-07-26 05:24:43.534506] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:24.561 [2024-07-26 05:24:43.534516] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:24.561 [2024-07-26 05:24:43.534527] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:24.561 [2024-07-26 05:24:43.534537] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:24.561 [2024-07-26 05:24:43.534547] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:24.561 [2024-07-26 05:24:43.534557] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:24.561 [2024-07-26 05:24:43.534585] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:24.561 [2024-07-26 05:24:43.534595] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:24.561 [2024-07-26 05:24:43.534605] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:24.561 [2024-07-26 05:24:43.534616] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:24.561 [2024-07-26 05:24:43.534626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:24.561 [2024-07-26 05:24:43.534635] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:24.561 [2024-07-26 05:24:43.534647] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:24.561 [2024-07-26 05:24:43.534658] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:24.561 [2024-07-26 05:24:43.534668] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:24.561 [2024-07-26 05:24:43.534678] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:24.561 
[2024-07-26 05:24:43.534689] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:24.561 [2024-07-26 05:24:43.534700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.561 [2024-07-26 05:24:43.534710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:24.561 [2024-07-26 05:24:43.534723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.742 ms 00:27:24.561 [2024-07-26 05:24:43.534733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.561 [2024-07-26 05:24:43.561553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.561 [2024-07-26 05:24:43.561736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:24.561 [2024-07-26 05:24:43.561842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.767 ms 00:27:24.561 [2024-07-26 05:24:43.561882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.561 [2024-07-26 05:24:43.561945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.561 [2024-07-26 05:24:43.561982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:24.561 [2024-07-26 05:24:43.562135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:24.561 [2024-07-26 05:24:43.562171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.561 [2024-07-26 05:24:43.621637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.561 [2024-07-26 05:24:43.621790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:24.561 [2024-07-26 05:24:43.621895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 59.372 ms 00:27:24.561 [2024-07-26 05:24:43.621931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.561 [2024-07-26 05:24:43.622005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.561 [2024-07-26 05:24:43.622039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:24.561 [2024-07-26 05:24:43.622076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:24.561 [2024-07-26 05:24:43.622106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.561 [2024-07-26 05:24:43.622253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.561 [2024-07-26 05:24:43.622316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:24.561 [2024-07-26 05:24:43.622382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:27:24.561 [2024-07-26 05:24:43.622411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.561 [2024-07-26 05:24:43.622479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.561 [2024-07-26 05:24:43.622513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:24.561 [2024-07-26 05:24:43.622543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:24.561 [2024-07-26 05:24:43.622578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.561 [2024-07-26 05:24:43.651873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.561 [2024-07-26 05:24:43.652014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:24.561 [2024-07-26 
05:24:43.652087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 29.252 ms 00:27:24.561 [2024-07-26 05:24:43.652128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.561 [2024-07-26 05:24:43.652293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.561 [2024-07-26 05:24:43.652340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:24.561 [2024-07-26 05:24:43.652372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:24.561 [2024-07-26 05:24:43.652401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.820 [2024-07-26 05:24:43.679774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.820 [2024-07-26 05:24:43.679939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:24.820 [2024-07-26 05:24:43.680032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.260 ms 00:27:24.820 [2024-07-26 05:24:43.680069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.820 [2024-07-26 05:24:43.693872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.820 [2024-07-26 05:24:43.694018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:24.820 [2024-07-26 05:24:43.694039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms 00:27:24.820 [2024-07-26 05:24:43.694052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.820 [2024-07-26 05:24:43.788448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.820 [2024-07-26 05:24:43.788530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:24.820 [2024-07-26 05:24:43.788547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 94.331 ms 00:27:24.820 [2024-07-26 05:24:43.788558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.820 [2024-07-26 05:24:43.788650] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:24.820 [2024-07-26 05:24:43.788693] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:24.820 [2024-07-26 05:24:43.788729] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:24.820 [2024-07-26 05:24:43.788765] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:24.820 [2024-07-26 05:24:43.788775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.820 [2024-07-26 05:24:43.788786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:24.820 [2024-07-26 05:24:43.788797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.162 ms 00:27:24.820 [2024-07-26 05:24:43.788812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.820 [2024-07-26 05:24:43.788877] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:24.820 [2024-07-26 05:24:43.788895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.820 [2024-07-26 05:24:43.788905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:24.820 [2024-07-26 05:24:43.788916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:24.820 [2024-07-26 
05:24:43.788927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.820 [2024-07-26 05:24:43.811775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.820 [2024-07-26 05:24:43.811815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:24.820 [2024-07-26 05:24:43.811828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.822 ms 00:27:24.820 [2024-07-26 05:24:43.811838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.820 [2024-07-26 05:24:43.824535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.820 [2024-07-26 05:24:43.824567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:24.820 [2024-07-26 05:24:43.824579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:24.820 [2024-07-26 05:24:43.824589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.820 [2024-07-26 05:24:43.824645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.820 [2024-07-26 05:24:43.824658] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:27:24.820 [2024-07-26 05:24:43.824674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:24.820 [2024-07-26 05:24:43.824684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.820 [2024-07-26 05:24:43.825030] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:27:25.388 [2024-07-26 05:24:44.418601] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:27:25.388 [2024-07-26 05:24:44.418912] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:27:25.957 [2024-07-26 05:24:45.002700] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:27:25.957 [2024-07-26 05:24:45.002832] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:25.957 [2024-07-26 05:24:45.002849] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:25.957 [2024-07-26 05:24:45.002867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.957 [2024-07-26 05:24:45.002879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:25.957 [2024-07-26 05:24:45.002897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1178.152 ms 00:27:25.957 [2024-07-26 05:24:45.002910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.957 [2024-07-26 05:24:45.002947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.957 [2024-07-26 05:24:45.002959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:25.957 [2024-07-26 05:24:45.002989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:25.957 [2024-07-26 05:24:45.003000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.957 [2024-07-26 05:24:45.016247] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:25.957 [2024-07-26 05:24:45.016387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.957 [2024-07-26 05:24:45.016401] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:25.957 [2024-07-26 05:24:45.016414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.368 ms 00:27:25.957 [2024-07-26 05:24:45.016424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.957 [2024-07-26 05:24:45.017002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.957 [2024-07-26 05:24:45.017023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:27:25.957 [2024-07-26 05:24:45.017035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.504 ms 00:27:25.957 [2024-07-26 05:24:45.017049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.957 [2024-07-26 05:24:45.019013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.957 [2024-07-26 05:24:45.019039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:25.957 [2024-07-26 05:24:45.019051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.946 ms 00:27:25.957 [2024-07-26 05:24:45.019061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.957 [2024-07-26 05:24:45.056956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.957 [2024-07-26 05:24:45.056992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:27:25.957 [2024-07-26 05:24:45.057012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 37.869 ms 00:27:25.957 [2024-07-26 05:24:45.057023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.957 [2024-07-26 05:24:45.057139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.957 [2024-07-26 05:24:45.057153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:25.957 [2024-07-26 05:24:45.057165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:25.957 [2024-07-26 05:24:45.057175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.957 [2024-07-26 05:24:45.059672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.957 [2024-07-26 05:24:45.059701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:25.958 [2024-07-26 05:24:45.059712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.479 ms 00:27:25.958 [2024-07-26 05:24:45.059726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.958 [2024-07-26 05:24:45.059757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.958 [2024-07-26 05:24:45.059768] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:25.958 [2024-07-26 05:24:45.059779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:25.958 [2024-07-26 05:24:45.059788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.958 [2024-07-26 05:24:45.059826] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:25.958 [2024-07-26 05:24:45.059838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.958 [2024-07-26 05:24:45.059848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:25.958 [2024-07-26 05:24:45.059859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:25.958 [2024-07-26 05:24:45.059869] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:25.958 [2024-07-26 05:24:45.059926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:25.958 [2024-07-26 05:24:45.059937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:25.958 [2024-07-26 05:24:45.059947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:25.958 [2024-07-26 05:24:45.059956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:25.958 [2024-07-26 05:24:45.061586] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1577.105 ms, result 0 00:27:26.218 [2024-07-26 05:24:45.074079] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.218 [2024-07-26 05:24:45.090044] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:26.218 [2024-07-26 05:24:45.100249] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:26.477 05:24:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:26.477 05:24:45 -- common/autotest_common.sh@852 -- # return 0 00:27:26.478 05:24:45 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:26.478 05:24:45 -- ftl/common.sh@95 -- # return 0 00:27:26.478 05:24:45 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:26.478 Validate MD5 checksum, iteration 1 00:27:26.478 05:24:45 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:26.478 05:24:45 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:26.478 05:24:45 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:26.478 05:24:45 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:26.478 05:24:45 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:26.478 05:24:45 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:26.478 05:24:45 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:26.478 05:24:45 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:26.478 05:24:45 -- ftl/common.sh@154 -- # return 0 00:27:26.478 05:24:45 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:26.737 [2024-07-26 05:24:45.614436] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
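Everything between the target relaunch and the listener coming back on port 4420 is the FTL recovery path: the superblock loads with no shared-memory state ("SHM: clean 0, shm_clean 0"), the P2L checkpoints are preprocessed and replayed, and the two open NV-cache chunks are re-read, with the "Recover open chunks P2L" step alone accounting for 1178 of the 1577 ms of "FTL startup". Since every ftl_mngt step logs a name/duration/status triple, slow steps can be ranked straight from a captured target log. A small sketch, assuming the log is saved one entry per line and that spdk_tgt.log is a stand-in path:

    # Rank FTL startup steps by duration from a captured target log.
    # Assumes one log entry per line; spdk_tgt.log is a stand-in path.
    awk '
        /trace_step.*name:/     { sub(/.*name: /, ""); step = $0 }
        /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                  printf "%10.3f ms  %s\n", $0, step }
    ' spdk_tgt.log | sort -rn | head -n 5   # five slowest steps first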
00:27:26.737 [2024-07-26 05:24:45.614595] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78886 ] 00:27:26.737 [2024-07-26 05:24:45.795925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.997 [2024-07-26 05:24:46.070468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.371  Copying: 631/1024 [MB] (631 MBps) Copying: 1024/1024 [MB] (average 613 MBps) 00:27:31.371 00:27:31.371 05:24:50 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:31.371 05:24:50 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:33.275 05:24:51 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:33.275 Validate MD5 checksum, iteration 2 00:27:33.275 05:24:51 -- ftl/upgrade_shutdown.sh@103 -- # sum=d660f06e13079e48fcb9125df20a1d44 00:27:33.275 05:24:51 -- ftl/upgrade_shutdown.sh@105 -- # [[ d660f06e13079e48fcb9125df20a1d44 != \d\6\6\0\f\0\6\e\1\3\0\7\9\e\4\8\f\c\b\9\1\2\5\d\f\2\0\a\1\d\4\4 ]] 00:27:33.275 05:24:51 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:33.275 05:24:51 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:33.275 05:24:51 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:33.275 05:24:51 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:33.275 05:24:51 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:33.275 05:24:51 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:33.275 05:24:51 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:33.275 05:24:51 -- ftl/common.sh@154 -- # return 0 00:27:33.275 05:24:51 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:33.275 [2024-07-26 05:24:51.992432] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
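A note on the escaped checksums in the [[ ... != \d\6\6\0... ]] lines: the script quotes the expected sum on the right-hand side of !=, which makes bash compare it as a literal string rather than a glob pattern, and xtrace renders a quoted pattern operand by backslash-escaping each character. The comparison itself is an ordinary literal match, which a two-line experiment reproduces:

    # Reproducing the escaped-checksum rendering: the quoted RHS of a
    # [[ != ]] test is printed character-escaped by bash's xtrace.
    set -x
    sum=d660f06e13079e48fcb9125df20a1d44
    [[ $sum != "$sum" ]] && echo 'checksum mismatch'
    # trace shows: [[ d660f06e... != \d\6\6\0\f\0\6\e... ]]
    set +x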
00:27:33.275 [2024-07-26 05:24:51.992775] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78953 ] 00:27:33.275 [2024-07-26 05:24:52.175545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.534 [2024-07-26 05:24:52.456889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.600  Copying: 648/1024 [MB] (648 MBps) Copying: 1024/1024 [MB] (average 646 MBps) 00:27:37.600 00:27:37.600 05:24:56 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:37.600 05:24:56 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:39.506 05:24:58 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:39.506 05:24:58 -- ftl/upgrade_shutdown.sh@103 -- # sum=66c8792160fdfee5cd3fa29219c5d0ee 00:27:39.506 05:24:58 -- ftl/upgrade_shutdown.sh@105 -- # [[ 66c8792160fdfee5cd3fa29219c5d0ee != \6\6\c\8\7\9\2\1\6\0\f\d\f\e\e\5\c\d\3\f\a\2\9\2\1\9\c\5\d\0\e\e ]] 00:27:39.506 05:24:58 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:39.506 05:24:58 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:39.506 05:24:58 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:39.506 05:24:58 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:39.506 05:24:58 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:39.506 05:24:58 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:39.506 05:24:58 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:39.506 05:24:58 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:39.506 05:24:58 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:39.506 05:24:58 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:39.506 05:24:58 -- ftl/common.sh@130 -- # [[ -n 78834 ]] 00:27:39.506 05:24:58 -- ftl/common.sh@131 -- # killprocess 78834 00:27:39.506 05:24:58 -- common/autotest_common.sh@926 -- # '[' -z 78834 ']' 00:27:39.506 05:24:58 -- common/autotest_common.sh@930 -- # kill -0 78834 00:27:39.506 05:24:58 -- common/autotest_common.sh@931 -- # uname 00:27:39.506 05:24:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:39.506 05:24:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78834 00:27:39.506 killing process with pid 78834 00:27:39.506 05:24:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:39.506 05:24:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:39.506 05:24:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78834' 00:27:39.506 05:24:58 -- common/autotest_common.sh@945 -- # kill 78834 00:27:39.506 05:24:58 -- common/autotest_common.sh@950 -- # wait 78834 00:27:40.885 [2024-07-26 05:24:59.655921] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:40.885 [2024-07-26 05:24:59.675650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.885 [2024-07-26 05:24:59.675689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:40.885 [2024-07-26 05:24:59.675704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:40.886 [2024-07-26 05:24:59.675734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.675757] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:40.886 [2024-07-26 05:24:59.679384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.679410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:40.886 [2024-07-26 05:24:59.679422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.613 ms 00:27:40.886 [2024-07-26 05:24:59.679432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.679642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.679654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:40.886 [2024-07-26 05:24:59.679665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.180 ms 00:27:40.886 [2024-07-26 05:24:59.679674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.680993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.681028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:40.886 [2024-07-26 05:24:59.681040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.303 ms 00:27:40.886 [2024-07-26 05:24:59.681050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.682023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.682051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:40.886 [2024-07-26 05:24:59.682063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.942 ms 00:27:40.886 [2024-07-26 05:24:59.682073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.698585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.698733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:40.886 [2024-07-26 05:24:59.698815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.455 ms 00:27:40.886 [2024-07-26 05:24:59.698851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.717096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.717270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:40.886 [2024-07-26 05:24:59.717375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.186 ms 00:27:40.886 [2024-07-26 05:24:59.717422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.717570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.717634] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:40.886 [2024-07-26 05:24:59.717739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:27:40.886 [2024-07-26 05:24:59.717783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.734373] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.734501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:40.886 [2024-07-26 05:24:59.734619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 
16.540 ms 00:27:40.886 [2024-07-26 05:24:59.734656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.749801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.749938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:40.886 [2024-07-26 05:24:59.750045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.088 ms 00:27:40.886 [2024-07-26 05:24:59.750082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.765225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.765371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:40.886 [2024-07-26 05:24:59.765391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.043 ms 00:27:40.886 [2024-07-26 05:24:59.765401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.780142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.780172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:40.886 [2024-07-26 05:24:59.780183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.673 ms 00:27:40.886 [2024-07-26 05:24:59.780192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.780233] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:40.886 [2024-07-26 05:24:59.780249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:40.886 [2024-07-26 05:24:59.780261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:40.886 [2024-07-26 05:24:59.780272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:40.886 [2024-07-26 05:24:59.780282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 
wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:40.886 [2024-07-26 05:24:59.780475] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:40.886 [2024-07-26 05:24:59.780498] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 414ebf38-7d95-4a7a-9a1a-529f611a254d 00:27:40.886 [2024-07-26 05:24:59.780509] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:40.886 [2024-07-26 05:24:59.780519] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:40.886 [2024-07-26 05:24:59.780528] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:40.886 [2024-07-26 05:24:59.780542] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:40.886 [2024-07-26 05:24:59.780551] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:40.886 [2024-07-26 05:24:59.780561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:40.886 [2024-07-26 05:24:59.780570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:40.886 [2024-07-26 05:24:59.780579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:40.886 [2024-07-26 05:24:59.780588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:40.886 [2024-07-26 05:24:59.780597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.780607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:40.886 [2024-07-26 05:24:59.780617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.365 ms 00:27:40.886 [2024-07-26 05:24:59.780644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.799726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.799762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:40.886 [2024-07-26 05:24:59.799774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.064 ms 00:27:40.886 [2024-07-26 05:24:59.799784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.800018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.886 [2024-07-26 05:24:59.800028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:40.886 [2024-07-26 05:24:59.800038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.214 ms 00:27:40.886 [2024-07-26 05:24:59.800047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.863510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:40.886 [2024-07-26 05:24:59.863548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:40.886 [2024-07-26 05:24:59.863560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:40.886 [2024-07-26 
05:24:59.863570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.863600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:40.886 [2024-07-26 05:24:59.863610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:40.886 [2024-07-26 05:24:59.863620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:40.886 [2024-07-26 05:24:59.863629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.863697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:40.886 [2024-07-26 05:24:59.863709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:40.886 [2024-07-26 05:24:59.863724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:40.886 [2024-07-26 05:24:59.863733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.886 [2024-07-26 05:24:59.863751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:40.886 [2024-07-26 05:24:59.863760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:40.886 [2024-07-26 05:24:59.863769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:40.886 [2024-07-26 05:24:59.863778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.887 [2024-07-26 05:24:59.980863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:40.887 [2024-07-26 05:24:59.980923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:40.887 [2024-07-26 05:24:59.980936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:40.887 [2024-07-26 05:24:59.980963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.145 [2024-07-26 05:25:00.031809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.145 [2024-07-26 05:25:00.031869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:41.145 [2024-07-26 05:25:00.031884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.145 [2024-07-26 05:25:00.031911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.145 [2024-07-26 05:25:00.032000] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.145 [2024-07-26 05:25:00.032013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:41.145 [2024-07-26 05:25:00.032023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.145 [2024-07-26 05:25:00.032043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.146 [2024-07-26 05:25:00.032088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.146 [2024-07-26 05:25:00.032100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:41.146 [2024-07-26 05:25:00.032110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.146 [2024-07-26 05:25:00.032119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.146 [2024-07-26 05:25:00.032249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.146 [2024-07-26 05:25:00.032264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:41.146 [2024-07-26 05:25:00.032275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:27:41.146 [2024-07-26 05:25:00.032285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.146 [2024-07-26 05:25:00.032329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.146 [2024-07-26 05:25:00.032341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:41.146 [2024-07-26 05:25:00.032352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.146 [2024-07-26 05:25:00.032362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.146 [2024-07-26 05:25:00.032399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.146 [2024-07-26 05:25:00.032410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:41.146 [2024-07-26 05:25:00.032420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.146 [2024-07-26 05:25:00.032429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.146 [2024-07-26 05:25:00.032481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:41.146 [2024-07-26 05:25:00.032492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:41.146 [2024-07-26 05:25:00.032503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:41.146 [2024-07-26 05:25:00.032512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.146 [2024-07-26 05:25:00.032639] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 356.950 ms, result 0 00:27:42.536 05:25:01 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:42.536 05:25:01 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:42.536 05:25:01 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:42.536 05:25:01 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:42.536 05:25:01 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:42.536 05:25:01 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:42.536 Remove shared memory files 00:27:42.536 05:25:01 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:42.536 05:25:01 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:42.536 05:25:01 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:42.536 05:25:01 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:42.536 05:25:01 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78627 00:27:42.536 05:25:01 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:42.536 05:25:01 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:42.536 ************************************ 00:27:42.536 END TEST ftl_upgrade_shutdown 00:27:42.536 ************************************ 00:27:42.536 00:27:42.536 real 1m30.591s 00:27:42.536 user 2m7.036s 00:27:42.536 sys 0m23.765s 00:27:42.536 05:25:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.536 05:25:01 -- common/autotest_common.sh@10 -- # set +x 00:27:42.536 05:25:01 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:27:42.536 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:27:42.536 05:25:01 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:27:42.536 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:27:42.536 Process with pid 71830 is not found 00:27:42.536 05:25:01 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:27:42.536 05:25:01 -- ftl/ftl.sh@14 -- # killprocess 71830 00:27:42.536 05:25:01 -- 
common/autotest_common.sh@926 -- # '[' -z 71830 ']' 00:27:42.536 05:25:01 -- common/autotest_common.sh@930 -- # kill -0 71830 00:27:42.536 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (71830) - No such process 00:27:42.536 05:25:01 -- common/autotest_common.sh@953 -- # echo 'Process with pid 71830 is not found' 00:27:42.536 05:25:01 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]] 00:27:42.536 05:25:01 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79083 00:27:42.536 05:25:01 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:42.536 05:25:01 -- ftl/ftl.sh@20 -- # waitforlisten 79083 00:27:42.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.537 05:25:01 -- common/autotest_common.sh@819 -- # '[' -z 79083 ']' 00:27:42.537 05:25:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.537 05:25:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:42.537 05:25:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.537 05:25:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:42.537 05:25:01 -- common/autotest_common.sh@10 -- # set +x 00:27:42.537 [2024-07-26 05:25:01.450632] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:42.537 [2024-07-26 05:25:01.450769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79083 ] 00:27:42.537 [2024-07-26 05:25:01.610254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.848 [2024-07-26 05:25:01.845846] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:42.848 [2024-07-26 05:25:01.846035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.224 05:25:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:44.224 05:25:03 -- common/autotest_common.sh@852 -- # return 0 00:27:44.224 05:25:03 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:27:44.224 nvme0n1 00:27:44.224 05:25:03 -- ftl/ftl.sh@22 -- # clear_lvols 00:27:44.224 05:25:03 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:44.224 05:25:03 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:44.482 05:25:03 -- ftl/common.sh@28 -- # stores=a204e1e9-1c8a-4ec2-8656-fac2fa71ba32 00:27:44.482 05:25:03 -- ftl/common.sh@29 -- # for lvs in $stores 00:27:44.482 05:25:03 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a204e1e9-1c8a-4ec2-8656-fac2fa71ba32 00:27:44.739 05:25:03 -- ftl/ftl.sh@23 -- # killprocess 79083 00:27:44.739 05:25:03 -- common/autotest_common.sh@926 -- # '[' -z 79083 ']' 00:27:44.739 05:25:03 -- common/autotest_common.sh@930 -- # kill -0 79083 00:27:44.739 05:25:03 -- common/autotest_common.sh@931 -- # uname 00:27:44.739 05:25:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:44.739 05:25:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79083 00:27:44.739 killing process with pid 79083 00:27:44.739 05:25:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:44.739 05:25:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = 
sudo ']' 00:27:44.739 05:25:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79083' 00:27:44.739 05:25:03 -- common/autotest_common.sh@945 -- # kill 79083 00:27:44.739 05:25:03 -- common/autotest_common.sh@950 -- # wait 79083 00:27:47.268 05:25:06 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:47.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:47.527 Waiting for block devices as requested 00:27:47.527 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.527 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.786 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:47.786 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:53.063 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:27:53.063 Remove shared memory files 00:27:53.063 05:25:11 -- ftl/ftl.sh@28 -- # remove_shm 00:27:53.063 05:25:11 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:53.063 05:25:11 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:53.063 05:25:11 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:53.063 05:25:11 -- ftl/common.sh@207 -- # rm -f rm -f 00:27:53.063 05:25:11 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:53.063 05:25:11 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:53.063 ************************************ 00:27:53.063 END TEST ftl 00:27:53.063 ************************************ 00:27:53.063 00:27:53.063 real 10m30.337s 00:27:53.063 user 13m5.895s 00:27:53.063 sys 1m28.043s 00:27:53.063 05:25:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:53.063 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:53.063 05:25:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:53.063 05:25:11 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:53.063 05:25:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:27:53.063 05:25:11 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:27:53.063 05:25:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:27:53.063 05:25:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:27:53.063 05:25:11 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:27:53.063 05:25:11 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:27:53.063 05:25:11 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:27:53.063 05:25:11 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:27:53.063 05:25:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:53.064 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:53.064 05:25:11 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:27:53.064 05:25:11 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:27:53.064 05:25:11 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:27:53.064 05:25:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.989 INFO: APP EXITING 00:27:54.989 INFO: killing all VMs 00:27:54.989 INFO: killing vhost app 00:27:54.989 INFO: EXIT DONE 00:27:55.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:55.927 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:27:55.927 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:27:55.927 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:55.927 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:56.866 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:56.866 Cleaning 00:27:56.866 Removing: 
/var/run/dpdk/spdk0/config 00:27:56.866 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:56.866 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:56.866 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:56.866 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:56.866 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:56.866 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:56.866 Removing: /var/run/dpdk/spdk0 00:27:56.866 Removing: /var/run/dpdk/spdk_pid56251 00:27:56.866 Removing: /var/run/dpdk/spdk_pid56488 00:27:56.866 Removing: /var/run/dpdk/spdk_pid56793 00:27:56.866 Removing: /var/run/dpdk/spdk_pid56898 00:27:56.866 Removing: /var/run/dpdk/spdk_pid57003 00:27:56.866 Removing: /var/run/dpdk/spdk_pid57137 00:27:56.866 Removing: /var/run/dpdk/spdk_pid57238 00:27:56.866 Removing: /var/run/dpdk/spdk_pid57283 00:27:56.866 Removing: /var/run/dpdk/spdk_pid57325 00:27:56.866 Removing: /var/run/dpdk/spdk_pid57392 00:27:56.866 Removing: /var/run/dpdk/spdk_pid57509 00:27:56.866 Removing: /var/run/dpdk/spdk_pid57956 00:27:56.866 Removing: /var/run/dpdk/spdk_pid58033 00:27:56.866 Removing: /var/run/dpdk/spdk_pid58120 00:27:56.866 Removing: /var/run/dpdk/spdk_pid58140 00:27:56.866 Removing: /var/run/dpdk/spdk_pid58309 00:27:56.866 Removing: /var/run/dpdk/spdk_pid58335 00:27:56.866 Removing: /var/run/dpdk/spdk_pid58504 00:27:56.866 Removing: /var/run/dpdk/spdk_pid58533 00:27:56.866 Removing: /var/run/dpdk/spdk_pid58602 00:27:56.866 Removing: /var/run/dpdk/spdk_pid58629 00:27:56.866 Removing: /var/run/dpdk/spdk_pid58693 00:27:56.866 Removing: /var/run/dpdk/spdk_pid58724 00:27:56.866 Removing: /var/run/dpdk/spdk_pid58921 00:27:56.866 Removing: /var/run/dpdk/spdk_pid58963 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59043 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59126 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59168 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59246 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59283 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59326 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59363 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59415 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59447 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59499 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59530 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59577 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59614 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59666 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59692 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59744 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59780 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59828 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59859 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59906 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59943 00:27:56.866 Removing: /var/run/dpdk/spdk_pid59985 00:27:56.866 Removing: /var/run/dpdk/spdk_pid60021 00:27:56.866 Removing: /var/run/dpdk/spdk_pid60073 00:27:56.866 Removing: /var/run/dpdk/spdk_pid60099 00:27:56.866 Removing: /var/run/dpdk/spdk_pid60151 00:27:56.866 Removing: /var/run/dpdk/spdk_pid60188 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60239 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60272 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60324 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60356 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60402 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60439 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60490 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60523 00:27:57.126 Removing: 
/var/run/dpdk/spdk_pid60575 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60604 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60659 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60699 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60749 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60780 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60831 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60864 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60917 00:27:57.126 Removing: /var/run/dpdk/spdk_pid60998 00:27:57.126 Removing: /var/run/dpdk/spdk_pid61123 00:27:57.126 Removing: /var/run/dpdk/spdk_pid61297 00:27:57.126 Removing: /var/run/dpdk/spdk_pid61405 00:27:57.126 Removing: /var/run/dpdk/spdk_pid61447 00:27:57.126 Removing: /var/run/dpdk/spdk_pid61922 00:27:57.126 Removing: /var/run/dpdk/spdk_pid62032 00:27:57.126 Removing: /var/run/dpdk/spdk_pid62141 00:27:57.126 Removing: /var/run/dpdk/spdk_pid62200 00:27:57.126 Removing: /var/run/dpdk/spdk_pid62231 00:27:57.126 Removing: /var/run/dpdk/spdk_pid62306 00:27:57.126 Removing: /var/run/dpdk/spdk_pid62998 00:27:57.126 Removing: /var/run/dpdk/spdk_pid63040 00:27:57.126 Removing: /var/run/dpdk/spdk_pid63551 00:27:57.126 Removing: /var/run/dpdk/spdk_pid63666 00:27:57.126 Removing: /var/run/dpdk/spdk_pid63781 00:27:57.126 Removing: /var/run/dpdk/spdk_pid63840 00:27:57.126 Removing: /var/run/dpdk/spdk_pid63871 00:27:57.126 Removing: /var/run/dpdk/spdk_pid63902 00:27:57.126 Removing: /var/run/dpdk/spdk_pid65856 00:27:57.126 Removing: /var/run/dpdk/spdk_pid66012 00:27:57.126 Removing: /var/run/dpdk/spdk_pid66016 00:27:57.126 Removing: /var/run/dpdk/spdk_pid66033 00:27:57.126 Removing: /var/run/dpdk/spdk_pid66080 00:27:57.126 Removing: /var/run/dpdk/spdk_pid66084 00:27:57.126 Removing: /var/run/dpdk/spdk_pid66102 00:27:57.126 Removing: /var/run/dpdk/spdk_pid66146 00:27:57.126 Removing: /var/run/dpdk/spdk_pid66156 00:27:57.126 Removing: /var/run/dpdk/spdk_pid66168 00:27:57.126 Removing: /var/run/dpdk/spdk_pid66207 00:27:57.126 Removing: /var/run/dpdk/spdk_pid66211 00:27:57.126 Removing: /var/run/dpdk/spdk_pid66234 00:27:57.126 Removing: /var/run/dpdk/spdk_pid67646 00:27:57.126 Removing: /var/run/dpdk/spdk_pid67759 00:27:57.126 Removing: /var/run/dpdk/spdk_pid67892 00:27:57.126 Removing: /var/run/dpdk/spdk_pid68008 00:27:57.126 Removing: /var/run/dpdk/spdk_pid68123 00:27:57.126 Removing: /var/run/dpdk/spdk_pid68227 00:27:57.126 Removing: /var/run/dpdk/spdk_pid68364 00:27:57.126 Removing: /var/run/dpdk/spdk_pid68446 00:27:57.126 Removing: /var/run/dpdk/spdk_pid68590 00:27:57.126 Removing: /var/run/dpdk/spdk_pid68990 00:27:57.126 Removing: /var/run/dpdk/spdk_pid69032 00:27:57.126 Removing: /var/run/dpdk/spdk_pid69507 00:27:57.126 Removing: /var/run/dpdk/spdk_pid69700 00:27:57.126 Removing: /var/run/dpdk/spdk_pid69804 00:27:57.126 Removing: /var/run/dpdk/spdk_pid69918 00:27:57.126 Removing: /var/run/dpdk/spdk_pid69977 00:27:57.126 Removing: /var/run/dpdk/spdk_pid70008 00:27:57.126 Removing: /var/run/dpdk/spdk_pid70318 00:27:57.126 Removing: /var/run/dpdk/spdk_pid70390 00:27:57.126 Removing: /var/run/dpdk/spdk_pid70477 00:27:57.126 Removing: /var/run/dpdk/spdk_pid70871 00:27:57.126 Removing: /var/run/dpdk/spdk_pid71031 00:27:57.126 Removing: /var/run/dpdk/spdk_pid71830 00:27:57.126 Removing: /var/run/dpdk/spdk_pid71970 00:27:57.126 Removing: /var/run/dpdk/spdk_pid72182 00:27:57.386 Removing: /var/run/dpdk/spdk_pid72290 00:27:57.386 Removing: /var/run/dpdk/spdk_pid72610 00:27:57.386 Removing: /var/run/dpdk/spdk_pid72851 00:27:57.386 Removing: /var/run/dpdk/spdk_pid73250 
00:27:57.386 Removing: /var/run/dpdk/spdk_pid73484 00:27:57.386 Removing: /var/run/dpdk/spdk_pid73611 00:27:57.386 Removing: /var/run/dpdk/spdk_pid73688 00:27:57.386 Removing: /var/run/dpdk/spdk_pid73809 00:27:57.386 Removing: /var/run/dpdk/spdk_pid73851 00:27:57.386 Removing: /var/run/dpdk/spdk_pid73926 00:27:57.386 Removing: /var/run/dpdk/spdk_pid74118 00:27:57.386 Removing: /var/run/dpdk/spdk_pid74377 00:27:57.386 Removing: /var/run/dpdk/spdk_pid74736 00:27:57.386 Removing: /var/run/dpdk/spdk_pid75111 00:27:57.386 Removing: /var/run/dpdk/spdk_pid75485 00:27:57.386 Removing: /var/run/dpdk/spdk_pid75918 00:27:57.386 Removing: /var/run/dpdk/spdk_pid76061 00:27:57.386 Removing: /var/run/dpdk/spdk_pid76155 00:27:57.386 Removing: /var/run/dpdk/spdk_pid76752 00:27:57.386 Removing: /var/run/dpdk/spdk_pid76827 00:27:57.386 Removing: /var/run/dpdk/spdk_pid77210 00:27:57.386 Removing: /var/run/dpdk/spdk_pid77548 00:27:57.386 Removing: /var/run/dpdk/spdk_pid78001 00:27:57.386 Removing: /var/run/dpdk/spdk_pid78125 00:27:57.386 Removing: /var/run/dpdk/spdk_pid78186 00:27:57.386 Removing: /var/run/dpdk/spdk_pid78259 00:27:57.386 Removing: /var/run/dpdk/spdk_pid78316 00:27:57.386 Removing: /var/run/dpdk/spdk_pid78386 00:27:57.386 Removing: /var/run/dpdk/spdk_pid78627 00:27:57.386 Removing: /var/run/dpdk/spdk_pid78677 00:27:57.386 Removing: /var/run/dpdk/spdk_pid78748 00:27:57.386 Removing: /var/run/dpdk/spdk_pid78834 00:27:57.386 Removing: /var/run/dpdk/spdk_pid78886 00:27:57.386 Removing: /var/run/dpdk/spdk_pid78953 00:27:57.386 Removing: /var/run/dpdk/spdk_pid79083 00:27:57.386 Clean 00:27:57.386 killing process with pid 48351 00:27:57.386 killing process with pid 48359 00:27:57.386 05:25:16 -- common/autotest_common.sh@1436 -- # return 0 00:27:57.386 05:25:16 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:27:57.386 05:25:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:57.386 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:27:57.645 05:25:16 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:27:57.645 05:25:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:57.645 05:25:16 -- common/autotest_common.sh@10 -- # set +x 00:27:57.645 05:25:16 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:57.645 05:25:16 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:57.645 05:25:16 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:57.645 05:25:16 -- spdk/autotest.sh@394 -- # hash lcov 00:27:57.645 05:25:16 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:57.645 05:25:16 -- spdk/autotest.sh@396 -- # hostname 00:27:57.645 05:25:16 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:57.904 geninfo: WARNING: invalid characters removed from testname! 
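The six lcov invocations traced in the entries that follow (autotest.sh lines 397-402) reduce to a merge-then-filter pipeline: combine the pre-test coverage baseline with the post-test capture, then strip third-party and generated code from the totals. Below is a minimal standalone sketch of that sequence. The output directory, tracefile names, --rc flags, and exclusion patterns are copied from the trace itself; the LCOV_OPTS array is shorthand introduced here for readability and is not a variable the SPDK scripts define.

#!/usr/bin/env bash
# Sketch of the coverage aggregation traced below; assumes lcov 1.x,
# whose -a (add/merge) and -r (remove pattern) flags appear in the log.
set -euo pipefail

out=/home/vagrant/spdk_repo/spdk/../output

# Flags repeated verbatim on every lcov call in the trace.
LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
           --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
           --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q)

# 1) Merge the pre-test baseline with the post-test capture.
lcov "${LCOV_OPTS[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" \
     -o "$out/cov_total.info"

# 2) Filter out code that should not count toward SPDK coverage.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  lcov "${LCOV_OPTS[@]}" -r "$out/cov_total.info" "$pattern" \
       -o "$out/cov_total.info"
done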
00:28:19.839 05:25:37 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:21.760 05:25:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:23.671 05:25:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:25.572 05:25:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:27.477 05:25:46 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:29.383 05:25:48 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:31.290 05:25:50 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:31.290 05:25:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:31.290 05:25:50 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:31.290 05:25:50 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.290 05:25:50 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.290 05:25:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.290 05:25:50 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.290 05:25:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.290 05:25:50 -- paths/export.sh@5 -- $ export PATH 00:28:31.290 05:25:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.290 05:25:50 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:31.290 05:25:50 -- common/autobuild_common.sh@438 -- $ date +%s 00:28:31.290 05:25:50 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721971550.XXXXXX 00:28:31.290 05:25:50 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721971550.SH2hcr 00:28:31.290 05:25:50 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:28:31.290 05:25:50 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 00:28:31.291 05:25:50 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:31.291 05:25:50 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:31.291 05:25:50 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:31.291 05:25:50 -- common/autobuild_common.sh@454 -- $ get_config_params 00:28:31.291 05:25:50 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:28:31.291 05:25:50 -- common/autotest_common.sh@10 -- $ set +x 00:28:31.291 05:25:50 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:28:31.291 05:25:50 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:31.291 05:25:50 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:31.291 05:25:50 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:31.291 05:25:50 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:31.291 05:25:50 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:31.291 05:25:50 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:31.291 05:25:50 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:31.291 05:25:50 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:31.291 05:25:50 -- common/autotest_common.sh@727 -- $ 
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:31.291 05:25:50 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:31.291 + [[ -n 5161 ]] 00:28:31.291 + sudo kill 5161 00:28:31.303 [Pipeline] } 00:28:31.327 [Pipeline] // timeout 00:28:31.334 [Pipeline] } 00:28:31.352 [Pipeline] // stage 00:28:31.358 [Pipeline] } 00:28:31.376 [Pipeline] // catchError 00:28:31.388 [Pipeline] stage 00:28:31.390 [Pipeline] { (Stop VM) 00:28:31.407 [Pipeline] sh 00:28:31.689 + vagrant halt 00:28:34.225 ==> default: Halting domain... 00:28:40.805 [Pipeline] sh 00:28:41.089 + vagrant destroy -f 00:28:43.623 ==> default: Removing domain... 00:28:44.203 [Pipeline] sh 00:28:44.484 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:28:44.491 [Pipeline] } 00:28:44.505 [Pipeline] // stage 00:28:44.510 [Pipeline] } 00:28:44.525 [Pipeline] // dir 00:28:44.530 [Pipeline] } 00:28:44.545 [Pipeline] // wrap 00:28:44.552 [Pipeline] } 00:28:44.566 [Pipeline] // catchError 00:28:44.576 [Pipeline] stage 00:28:44.578 [Pipeline] { (Epilogue) 00:28:44.592 [Pipeline] sh 00:28:44.874 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:50.162 [Pipeline] catchError 00:28:50.164 [Pipeline] { 00:28:50.180 [Pipeline] sh 00:28:50.464 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:50.465 Artifacts sizes are good 00:28:50.475 [Pipeline] } 00:28:50.493 [Pipeline] // catchError 00:28:50.505 [Pipeline] archiveArtifacts 00:28:50.512 Archiving artifacts 00:28:50.660 [Pipeline] cleanWs 00:28:50.673 [WS-CLEANUP] Deleting project workspace... 00:28:50.673 [WS-CLEANUP] Deferred wipeout is used... 00:28:50.680 [WS-CLEANUP] done 00:28:50.682 [Pipeline] } 00:28:50.700 [Pipeline] // stage 00:28:50.707 [Pipeline] } 00:28:50.725 [Pipeline] // node 00:28:50.731 [Pipeline] End of Pipeline 00:28:50.773 Finished: SUCCESS
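
Postscript on the two shell errors recorded during the FTL teardown above ("/home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected", and the same at line 89): the xtrace shows the test being evaluated as '[' -eq 1 ']', meaning an unset or empty variable was expanded unquoted inside a "[ ... -eq 1 ]" test, leaving -eq with no left operand. The test then simply returns non-zero, the guarded block is skipped, and the run still finishes green, which makes the defect easy to miss. A hedged sketch of the conventional repair follows; some_flag is a placeholder name, since the log does not show which variable ftl.sh actually tests.

#!/usr/bin/env bash
# Reproduce and repair the '[: -eq: unary operator expected' failure.

some_flag=""   # simulate the unset/empty variable seen in the trace

# Broken form (what ftl.sh effectively ran):
#   [ $some_flag -eq 1 ]   ->   [ -eq 1 ]   ->   unary operator expected

# Repair 1: quote the expansion and supply a default, so the test
# always receives two operands.
if [ "${some_flag:-0}" -eq 1 ]; then
  echo "flag set"
fi

# Repair 2: use [[ ]], which does not word-split and treats an empty
# arithmetic operand as 0 (a default is still good hygiene).
if [[ ${some_flag:-0} -eq 1 ]]; then
  echo "flag set"
fi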